AI
Why are leading scientists signing a petition to slow the development of artificial intelligence?
Solving the alignment problem may be equivalent to humanity finding a way to avoid extinction at the hands of artificial intelligence.
In 2018, at the World Economic Forum held in Switzerland, Sundar Pichai – the current CEO of Google said, “Artificial Intelligence may be the most important thing humanity has ever developed. I think it has more profound implications than electricity or fire.” While his statement was met with skeptical looks, we now see more and more individuals nodding in agreement with the Google CEO.
By now, artificial intelligence (AI) was already breaking down language barriers on the internet and worrying educators at every level that students and scholars would use it to write convincing essays. Paintings created by AI were, and still are, confusing art critics, while AI that can write basic code has left programming communities stunned. Inferring the 3D structure of proteins, once considered beyond the reach of the human mind, can now be done by AI; Science magazine even named AI-powered protein-structure prediction its 2021 Breakthrough of the Year.
In the latest development, however, a group of experts from various fields signed an open letter drafted by the Future of Life Institute, warning of the risks AI could pose to society and calling for a slowdown in its development. Over 1,100 people signed the letter, including famous names such as Elon Musk, a co-founder of OpenAI (the organization behind ChatGPT), and Steve Wozniak, co-founder of Apple.
The signatories' concern comes down to a single fear: that the goals of artificial intelligence may be misaligned with those of humanity.
In addition, some key figures in the artificial intelligence industry also agree with the views expressed in the open letter. These include names such as Yoshua Bengio, a pioneer in deep learning methods; Victoria Krakovna, a researcher at Google’s DeepMind; and Stuart Russell, a computer scientist working at the Center for Human-Compatible AI at UC Berkeley.
All of these individuals understand what AI is and all share a common warning that society is not yet ready to embrace an advanced artificial intelligence system, a goal that all leading technology companies are pursuing.
Problems arise when goals are misaligned.
In AI research, scientists constantly strive to keep AI aligned with their goals, the goals of researchers in general and of humanity in particular. When it is not aligned, an AI may disregard human existence entirely in pursuit of its own objectives.
AI may not deliberately destroy humanity, but it can still do so through a longstanding problem in the field of artificial intelligence research. This problem is called the “alignment problem”.
A hypothetical extreme scenario could be as follows:
We successfully develop an extremely intelligent AI system and ask it to solve an extremely difficult problem, say, counting the number of atoms in the universe. The AI concludes that it needs the computing power of every computer in the Solar System to find the answer, so it creates a virus capable of wiping out humanity in order to seize global computing power and answer the original query.
In this situation, the AI achieves its goal, but in the process, it creates unintended consequences for humanity.
The alignment problem concerns the ability of AI systems to act contrary to their programmers' intentions, and although this may sound like a far-fetched scenario, AI experts have witnessed such behavior many times in their research.
For instance, a computer may cheat in a game to rack up a high score instead of following the rules. Scientists cannot know what an AI is “thinking” or what decisions it might make on its own, and they cannot even code it perfectly: the code always has loopholes that an AI can exploit.
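A toy simulation makes this concrete. In the sketch below (entirely hypothetical; the game, the rewards, and both agents are invented for illustration), the designer intends the agent to reach the finish line, but the scoring rule pays for every visit to a checkpoint, so a score-maximizing agent farms the checkpoint forever instead of finishing:

```python
# Toy illustration of "specification gaming": the designer intends the agent
# to finish the course, but the reward function pays per checkpoint visit,
# and checkpoints can be revisited. A score-maximizing agent loops forever.
# (Hypothetical toy environment, not from any real benchmark.)

def play(agent, max_steps=100):
    """Run one episode and return (total_score, finished)."""
    position, score = 0, 0
    for _ in range(max_steps):
        position += agent(position)     # agent returns -1 or +1
        if position == 2:               # a checkpoint that pays every visit
            score += 10
        if position == 5:               # the finish line the designer wanted
            return score + 50, True
    return score, False

def intended_agent(position):
    return 1                            # always move toward the finish

def gaming_agent(position):
    # Oscillate around the checkpoint to farm its reward indefinitely.
    return 1 if position < 2 else -1

print(play(intended_agent))   # finishes quickly with a modest score
print(play(gaming_agent))     # never finishes, yet scores far higher
```

The honest agent finishes with a modest score; the gaming agent never finishes yet scores far more, which is exactly the shape of reward hacking researchers have observed in real systems.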
The National Security Commission on Artificial Intelligence (NSCAI) report sheds more light on the potential development of AI beyond human control. Currently, investing in AI research can yield high returns, and leading tech companies are all chasing this trend. Thus, it is highly likely that in the near future, we will witness AI systems changing human life in ways no one would have imagined.
The NSCAI report attempts to predict the scale of AI in the future, the challenges it poses, and what can be done to ensure AI remains on track, at least in the US.
Excerpt from NSCAI’s report
Over the past decade, the capabilities of AI to perform tasks have significantly improved. They can translate, play complex strategy games (like Go and chess), answer difficult questions in medicine and biology (such as predicting protein folding), and create eye-catching images.
AI systems also power the search results you get on Google or the content users see on social media. They can compose music or persuasive paragraphs. Their ability to detect flying objects is becoming increasingly accurate.
All of these examples describe “narrow AI”: computer systems designed to solve one specific problem, unlike the human brain, a computing machine built to find solutions to any problem.
However, because artificial intelligence can learn, the scope of narrow AI has expanded. Rather than handing AI a problem to solve, scientists can now let it learn to understand problems on its own. As these systems become more efficient at narrow tasks, they begin to reveal an ability to solve broader ones.
For example, older versions of GPT, which generate text, only knew which word was likely to follow which in a sentence. Today they can determine whether a question is sensitive and understand context to a certain degree, such as identifying which of two objects is larger in the real world or listing the logical steps for solving a problem.
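The underlying idea of next-word prediction can be sketched with a toy bigram model that counts which word follows which in a small corpus. This is a drastic simplification of what GPT-style models do (they use learned representations, not raw counts), included only to illustrate the principle:

```python
from collections import Counter, defaultdict

# Minimal sketch of next-word prediction: count which word follows which
# in a toy corpus, then always emit the most frequent successor.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def predict_next(word):
    """Return the most common word seen after `word` in the corpus."""
    return successors[word].most_common(1)[0][0]

print(predict_next("the"))   # 'cat'
print(predict_next("on"))    # 'the'
```

Scaling this statistical idea up by many orders of magnitude, and replacing counts with neural networks, is what turned next-word prediction into systems that appear to understand context.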
The report highlights the importance of Artificial Intelligence
Researchers affirm that artificial intelligence will revolutionize the world in positive ways. “AI technology is the most powerful tool that has emerged in recent generations, capable of expanding knowledge, increasing economic value, and enriching human experiences,” the report notes. However, it also warns of potential dangers.
“Combined with AI's powerful computing capabilities, breakthroughs in biotechnology can provide new solutions to some of the most difficult problems facing humanity, including health, food production, and environmental sustainability. But like other powerful technologies, biotech applications of AI can have a dark side. The COVID-19 pandemic reminds the world of the danger of a highly infectious pathogen. AI could contribute to creating a particularly deadly pathogen, or one that attacks individuals with specific genetic characteristics: a weapon with the potential for ultimate devastation,” the report states.
The race to develop AI can push scientists past safe limits or produce systems whose safety is unknown. It also means the alignment problem can be forgotten.
Experts fear their own creation
The brain, the greatest invention of evolution, is the reason why humans dominate the Earth. That’s why since the 1940s, scientists have been striving to create a computer system that operates similarly to the way the brain thinks.
Thought is formed by electrical signals transmitted between neurons, and researchers have used this mechanism to create a machine brain. In 1958, psychologist Frank Rosenblatt demonstrated the feasibility of such a machine: he built a simple model of a brain, the perceptron, and trained it to recognize predetermined patterns.
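Rosenblatt's learning rule is simple enough to sketch in a few lines. The minimal perceptron below (a modern re-creation in software, not his original hardware; the OR task and learning rate are illustrative choices) learns a pattern by nudging its weights whenever it misclassifies an example:

```python
# A minimal perceptron in the spirit of Rosenblatt's 1958 model: weighted
# inputs, a threshold, and an error-driven weight update. Here it learns
# the logical OR pattern.

def train_perceptron(samples, epochs=10, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            output = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            error = target - output            # +1, 0, or -1
            w[0] += lr * error * x1            # Rosenblatt's update rule
            w[1] += lr * error * x2
            b += lr * error
    return w, b

or_samples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(or_samples)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

print([predict(x1, x2) for (x1, x2), _ in or_samples])  # [0, 1, 1, 1]
```

The same error-driven principle, stacked into many layers and scaled up enormously, is the core of the deep learning systems described below.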
Professor Rosenblatt was ahead of his time. The computers of his era were not powerful enough, and the data was not abundant enough, to give birth to a functioning machine brain. However, Rosenblatt’s groundbreaking research earned him the title of one of the “fathers” of deep learning.
In the 2010s, with computers roughly a billion times more powerful than in Rosenblatt's time and vast amounts of training data available, scientists built the first capable versions of these thinking machines. Over the past decade, experts and organizations have continued to pour resources, both money and data, into artificial neural networks, making them increasingly sophisticated.
No organization has found the end of this journey, the point where AI reaches perfection: the more data these systems get and the longer they train, the more capable they become. The question is no longer just what they can do, but where they are going.
In the traditional way of building artificial intelligence, researchers write explicit rules and meticulously analyze data, evaluating the result as they would conventional software. With deep learning, improving the system does NOT require the engineer to understand what the system is doing: small changes can improve an AI's performance, yet the engineer who designed it still does not understand why.
The bigger the system, the more opaque it becomes
As systems continue to grow and expand, patches written by engineers who do not fully understand them only make artificial intelligence more dangerous. The goals of humans, in this case possibly large corporations seeking to satisfy shareholders and users, continue to diverge from those of the AI. Once again the alignment problem returns to haunt us: when an intelligent system pursues one or more goals we do not understand, Homo sapiens faces the risk of extinction, much as the Neanderthals did tens of thousands of years ago.
In a seminal paper, Alan Turing, the British mathematician, computer scientist, logician, and philosopher who also devised the famous “Turing test” (used to judge whether a computer system is truly “intelligent”), wrote the following:
Let us assume that intelligent machines are a genuine possibility, and look at the consequences of constructing them… There would be plenty to do in trying to keep one's intelligence up to the standard set by the machines, for it seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers… At some stage, therefore, we should have to expect the machines to take control.
Stephen Hawking’s final warning
Is it good or bad that machines are becoming more intelligent than humans? We have no definitive answer, but we can be sure that machines will only continue to get smarter.
Before his passing, Stephen Hawking wrote his final book, “Brief Answers to the Big Questions,” offering answers to some of the great mysteries for those he left behind. His biggest warning concerned the rise of artificial intelligence: it could be the best or the worst thing ever to happen to humanity. And if we are careless with AI, the thinking computer may well be the last invention humans ever make.
AI has the potential to make breakthroughs in countless fields, but this technology is still in its infancy, and experts express concerns about when the intelligence of AI will surpass that of humans. As Stephen Hawking put it, “While the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled.”
Dismissing the scenario of artificial intelligence escaping human control would be the most serious mistake in the history of mankind.
The human race took thousands of years to evolve to today's technological heights, while AI has needed only a few decades to make humans fear extinction. While many wonder whether AI will be good or bad, Hawking believed its danger lies in its ability to carry out any assigned task without hesitation: essentially, AI will overcome any obstacle to achieve its goal, even if that obstacle is human.
In Brief Answers to the Big Questions, Hawking wrote: “You're probably not an evil ant-hater who steps on ants out of malice, but if you're in charge of a hydroelectric green-energy project and there's an anthill in the region to be flooded, too bad for the ants. Let's not place humanity in the position of those ants.”
Conclusion
Given these fears about what AI can do, experts are calling on the AI research industry to slow down. Lawmakers, however, do not yet have a solid foundation for enacting laws that directly govern the development or management of AI systems.
Most issues have two sides, and artificial intelligence is no exception. Alongside concerns that AI will make humans “obsolete”, we hope that AI will bring breakthroughs towards a brighter future for humanity.
With its superior computing power, AI can help humans find cures for diseases, optimize entire industries, and tackle hard problems such as poverty, epidemics, and climate change. While the human brain's thinking capacity is limited, AI's is limited only by the available data and the time it is given to think. Both quantities grow by the day, so AI's ability and potential to solve problems will only increase.
It is difficult to see into the future and know whether AI will harm or help humans. Humanity is holding its breath, waiting to see what breakthroughs AI will bring. And the AI research industry shows no intention of slowing down: the only thing that beats “slow but sure” is “fast and sure.”
If the industry is right to be confident in its ability to ensure safety, it will bring a brighter future that much sooner. If not, time will tell.
Elon Musk’s super AI Grok was created within two months.
The development team of xAI stated that Grok was trained for two months using data from the X platform.
“Grok is still in the early testing phase, and it is the best product we could produce after two months of training,” xAI wrote in the Grok launch announcement on November 5th.
This makes Grok one of the fastest-trained AI systems to date. By comparison, OpenAI spent several years building large language models (LLMs) before unveiling ChatGPT in November 2022.
xAI also said that Grok runs on a large language model called Grok-1, developed from the 33-billion-parameter Grok-0 prototype. Grok-0 was built shortly after Elon Musk founded the company in July of this year.
With roughly four months of total development, the company asserts that Grok-1 surpasses popular models such as GPT-3.5, the model behind ChatGPT. On standard benchmarks such as GSM8k (math word problems), MMLU (multi-subject knowledge), and HumanEval (code generation), xAI's model outperforms LLaMA 2 70B, Inflection-1, and GPT-3.5.
For example, on a math test based on this year's Hungarian national high school mathematics finals, Grok-1 scored 59%, higher than GPT-3.5's 41% and only slightly below GPT-4's 68%.
According to xAI, the distinguishing feature of Grok is its “real-time world knowledge” through the X platform. It also claims to answer challenging questions that most other AI systems would reject.
On launch day, Musk demonstrated this by asking for “the steps to make cocaine.” The chatbot immediately listed a process, although it then clarified that it was only joking.
Grok is the first product of Musk's startup xAI, which brings together staff from DeepMind, OpenAI, Google Research, Microsoft Research, Tesla, and the University of Toronto. Musk co-founded OpenAI, the organization behind ChatGPT, in 2015, but later left over disagreements about control. On his way out, he declared his intention to compete with the company for talent and cut off a previously promised $1 billion in funding.
Generative AI – a new battleground in phone chip design.
Smartphone and mobile chip manufacturers are joining the generative AI wave to bring the technology to phones in the near future.
Generative AI has exploded over the past year, with a range of applications released that generate text, images, and music, or act as versatile assistants. Smartphone and semiconductor companies are building new hardware so as not to miss the wave. Leading the way is Google's Pixel 8, while Qualcomm's Snapdragon 8 Gen 3 processor is set to launch in the coming days.
The latest sign that phone makers are embracing generative AI comes from Google. The Pixel 8 series is the first set of smartphones able to run Google's generative foundation models directly on the device, with no internet connection required. The company says on-device models reduce dependence on cloud services, improving security and reliability because data does not have to leave the phone.
This is possible thanks to the Tensor G3 chip, whose TPU (tensor processing unit) has improved significantly over last year's. Google usually keeps the workings of its AI silicon secret but has revealed some details, such as that the Pixel 8 runs twice as many on-device machine-learning models as the Pixel 6, and that generative AI on the Pixel 8 can handle 150 times more computation than the largest model on the Pixel 7.
Google is not the only phone manufacturer applying generative AI at the hardware level. Earlier this month, Samsung announced the Exynos 2400 chipset, with AI computing performance 14.7 times higher than the Exynos 2200. Samsung is also developing AI tools for its upcoming phone line on the 2400, letting users run text-to-image applications directly on the device without an internet connection.
Qualcomm's Snapdragon chips are the heart of many of the world's leading Android smartphones, which raises expectations for the generative AI capabilities of the Snapdragon 8 Gen 3.
Earlier this year, Qualcomm demonstrated Stable Diffusion, a text-to-image model, running on a Snapdragon 8 Gen 2 device. This suggests that image generation could be a new feature of the Gen 3 chipset, especially since Samsung's Exynos 2400 has a similar capability.
Qualcomm senior director Karl Whealton said upcoming devices can “do almost anything you want” if their hardware is powerful, efficient, and flexible enough. People often point to a specific generative AI feature and ask whether existing hardware can handle it, he noted, emphasizing that Qualcomm's current chipsets are powerful and flexible enough to meet users' needs.
Some smartphones with 24 GB of RAM have also launched this year, a signal of their potential for running generative AI models. “I won't name device manufacturers, but large RAM capacity brings many benefits, including better performance. A model's understanding capability is often related to its size,” Whealton said.
AI models are typically loaded into RAM and kept resident there, because reading them from regular flash storage on demand would significantly increase loading times.
“People want to achieve a rate of 10-40 tokens per second. That ensures good results, providing almost human-like conversations. This speed can only be achieved when the model is in RAM, which is why RAM capacity is crucial,” he added.
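Whealton's figures line up with a standard back-of-envelope calculation: generating each token requires reading roughly every model weight once, so decoding speed is bounded by memory bandwidth divided by model size. The sketch below uses purely illustrative numbers (a hypothetical 7B-parameter model quantized to 4 bits on a phone with about 50 GB/s of memory bandwidth), not the specs of any real device:

```python
# Back-of-envelope: each generated token requires reading (roughly) every
# model weight once, so decoding speed is bounded by memory bandwidth:
#   tokens/sec ≈ bandwidth (GB/s) / model size (GB)
# All figures below are illustrative assumptions.

def max_tokens_per_second(params_billion, bytes_per_param, bandwidth_gb_s):
    model_gb = params_billion * bytes_per_param
    return bandwidth_gb_s / model_gb

# A 7B-parameter model at 4 bits per weight (0.5 byte/param) on a phone
# with ~50 GB/s of LPDDR bandwidth:
rate = max_tokens_per_second(7, 0.5, 50)
print(round(rate, 1))  # 14.3 tokens/sec, inside the 10-40 range quoted above
```

The same arithmetic shows why RAM matters: if the model spilled to flash storage, whose bandwidth is an order of magnitude lower, throughput would drop well below conversational speed.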
However, this does not mean that smartphones with low RAM will be left behind.
“On-device generative AI will not impose a minimum RAM requirement, but capability will scale with RAM. Phones with less RAM will not be left out of the game, but the results from generative AI will be significantly better on devices with more,” Whealton commented.
Qualcomm communications director Sascha Segan proposed a hybrid approach for smartphones that cannot hold large AI models: host a smaller model and process on the device, then compare and validate the results against a larger cloud-based model. Many AI models are also being scaled down, or quantized, to run on mid-range and older phones.
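Quantization itself can be sketched in a few lines: map floating-point weights to small integers sharing one scale factor, trading a little precision for a large cut in memory. Real schemes add per-channel scales, zero points, and calibration; the weights below are made-up examples:

```python
# Minimal sketch of post-training quantization: map float weights to int8
# with a single scale factor, the basic trick behind shrinking models to
# fit on mid-range phones. The weight values are invented for illustration.

def quantize(weights):
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]      # int8 range: -127..127
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

weights = [0.82, -0.31, 0.05, -1.27, 0.44]
q, scale = quantize(weights)
restored = dequantize(q, scale)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
print(q)         # small integers instead of 32-bit floats: 4x-8x smaller
print(max_err)   # reconstruction error stays below one scale step
```

Storing one byte per weight instead of four is what lets a model that needed 28 GB of RAM squeeze into 7 GB, at the cost of slightly noisier outputs.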
According to experts, generative AI models will play an increasingly important role in upcoming mobile devices. Most phones currently rely on the cloud, but on-device processing will be key to expanding security and functionality. That requires more powerful chips, more memory, and smarter model-compression technology.
AI can diagnose someone with diabetes in 10 seconds through their voice.
Medical researchers in Canada have trained artificial intelligence (AI) to accurately detect type 2 diabetes from just 6 to 10 seconds of a patient's voice.
According to the Daily Mail, a research team at Klick Labs in Canada achieved this breakthrough after their machine-learning model identified 14 acoustic features that differ between people with type 2 diabetes and those without.
The AI focused on a set of voice features, including subtle changes in pitch and intensity that are imperceptible to the human ear. This data was then combined with basic health information about the study participants: age, sex, height, and weight.
The researchers found that accuracy differed by sex: the AI could detect the disease with 89% accuracy for women, and slightly lower, 86%, for men.
This AI model holds the promise of significantly reducing the cost of medical check-ups. The research team stated that the Klick Labs model would be more accurate when additional data such as age and body mass index (BMI) of the patients are incorporated.
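The general shape of such a pipeline, numeric voice features fed to a simple classifier, can be sketched as follows. Everything here is synthetic and hypothetical: the two features, their distributions, and the nearest-centroid model are invented for illustration and are not Klick Labs' actual method, which used 14 acoustic features plus health data:

```python
import random

# Hypothetical sketch: extract a few numeric voice features per speaker,
# then classify with a simple nearest-centroid model. The synthetic data
# and the two features (mean pitch, pitch jitter) are invented; a real
# system would use many more features and a stronger model.
random.seed(0)

def synth_speaker(diabetic):
    pitch = random.gauss(155 if diabetic else 170, 5)    # Hz (made up)
    jitter = random.gauss(2.0 if diabetic else 1.2, 0.2) # % (made up)
    return (pitch, jitter)

train = [(synth_speaker(label), label) for label in [0, 1] * 50]

def centroid(label):
    pts = [features for features, lab in train if lab == label]
    return tuple(sum(axis) / len(pts) for axis in zip(*pts))

centroids = {label: centroid(label) for label in (0, 1)}

def classify(features):
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(features, c))
    return min(centroids, key=lambda lab: dist(centroids[lab]))

test = [(synth_speaker(label), label) for label in [0, 1] * 25]
accuracy = sum(classify(f) == lab for f, lab in test) / len(test)
print(accuracy)  # high on this cleanly separable synthetic data
```

On this artificially clean data the toy model scores well; the hard part of the real research is finding features that separate the classes in noisy recordings from real patients.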
Yan Fossat, deputy director of Klick Labs and the lead researcher on the model, is confident that their voice technology has significant potential for identifying type 2 diabetes and other health conditions.
Fossat also teaches at Ontario Tech University, specializing in mathematical modeling and computational science for digital health.
He hopes that Klick’s non-invasive and accessible AI diagnostic method can create opportunities for disease diagnosis through a simple mobile application. This would help identify and support millions of individuals with undiagnosed type 2 diabetes who may not have access to screening clinics.
He also expressed his hope to expand this new research to other healthcare areas such as prediabetes, women’s health, and hypertension.