Reaching Singularity Is Not an ‘If’ But a ‘When.’ We Need To Get It Right the First Time (2024)

ChatGPT and other artificial intelligence programs have been red-hot topics in the news lately, simultaneously wracking nerves and exciting us with new possibilities in medicine, linguistics, and even autonomous driving. There are a ton of “what ifs” in this connected future, causing us to rethink everything from killer robots to our own job security.

So should we take a step back from this kind of turbocharged AI to assuage our fears? That depends on who you ask, but it all boils down to the idea of singularity—an “event horizon” in which machine intelligence surpasses our own intelligence.

By one measure, technological singularity could be as few as seven years away. That prompted us to reach out to subject-matter experts to find out more about what exactly singularity is, how close we are, and if we should start taking the early 2010s Doomsday Preppers reality show more seriously.

What Is Singularity?

Singularity is the moment when machine intelligence equals or surpasses human intelligence—a concept that visionaries like Stephen Hawking and Bill Gates have taken seriously for quite a long time. Machine intelligence might sound complicated, but it is simply defined as advanced computing that allows a device (a computer, phone, or even an algorithm) to interact and communicate intelligently with its environment.

The concept of singularity has been around for decades. English mathematician Alan Turing—widely regarded as the father of theoretical computer science and artificial intelligence—explored the possibility in the 1950s. He came up with his famed Turing test to find out whether machines could think for themselves; the evaluation pits a human against a computer, challenging the system to fool us into thinking it’s actually human itself. The recent advent of highly advanced AI chatbots like ChatGPT has brought Turing’s litmus test back into the spotlight. Spoiler alert: AI has already passed it.

“The difference between machine intelligence and human intelligence is that our intelligence is fixed but that is not the case for machines,” Ishaani Priyadarshini, a postdoctoral scholar at UC Berkeley with expertise in applied artificial intelligence and technological singularity, tells Popular Mechanics. “In the cases of machines, there is no end to it, you can always increase it, which is not the case for humans.” Unlike our brains, AI systems can be expanded many times over; the real limitation is space to house all of the computing power.

When Will We Reach Singularity?

While claims that we’ll reach singularity within the next decade are all over the internet, they are, at best, speculative. Priyadarshini believes that singularity already exists in bits and pieces—almost like a puzzle that we’ve yet to complete. She supports estimates claiming we can reach singularity sometime after 2030, adding that it’s hard to be absolutely certain with technology that we know so little about. There’s a very real possibility that it could take much longer than that to reach singularity’s “event horizon”—the point of no return when we unleash superintelligent computer systems.

Having said that, we’ve already seen signs of singularity in our lifetime. “There are games which humans can never win from machines, and that is a sure sign of singularity,” says Priyadarshini. To offer some perspective, IBM’s 1997 “Deep Blue” supercomputer was the first AI to defeat a human chess player. And they didn’t just put Joe Shmoe up to bat: Deep Blue went up against Garry Kasparov, who was the World Chess Champion at the time.


Singularity is still a notoriously difficult concept to measure. Even today, we’re struggling to find markers of our progression toward it. Many experts claim that language translation is the best yardstick for gauging our progress; for instance, when AI is able to translate speech at the same level as or better than a human, that would be a good sign that we’ve gotten a step closer to singularity.

However, Priyadarshini reckons that memes, of all things, could be another marker of our progression toward singularity, as AI is notoriously bad at understanding them.

What’s Possible Once AI Reaches Singularity?


We have no idea what a superintelligent system would be capable of. “We would need to be superintelligent ourselves,” Roman Yampolskiy, an associate professor in computer engineering and computer science at the University of Louisville, tells Popular Mechanics. We are only able to speculate using our current level of intelligence.

Yampolskiy recently wrote a paper about AI predicting decisions that AI can make. And it’s rather disturbing. “You have to be at least that intelligent to be capable of predicting what the system will do . . . if we’re talking about systems which are smarter than humans [superintelligent], then it’s impossible for us to predict inventions or decisions,” he says.

Priyadarshini says it’s hard to know whether or not AI has bad intentions; in her view, rogue AI is merely the product of bias in its code—essentially an unforeseen side effect of our programming. Critically, AI is nothing more than decision-making based on a set of rules and parameters. “We want self-driving cars, we just don’t want them to jump red lights and collide with the passengers,” says Priyadarshini. In other words, a self-driving car may see scything through red lights and human beings as the most efficient way to reach its destination on time.

Much of this has to do with the concept of unknown unknowns, where we don’t have the brainpower to accurately predict what superintelligent systems are capable of. In fact, IBM currently estimates that only one-third of developers know how to properly test these systems for any potential bias that could be problematic. To bridge the gap, the company developed a novel solution called FreaAI that can find weaknesses in machine-learning models by examining “human-interpretable” slices of data. It’s unclear if this system can reduce bias in AI, but it’s clearly a step ahead of us humans.
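To make the slicing idea concrete, here is a minimal, hypothetical sketch in Python—not IBM’s actual FreaAI implementation, whose internals aren’t public—of what hunting for weak, human-interpretable data slices can look like: group test records by an interpretable feature, then flag any slice whose accuracy falls well below the overall accuracy.

```python
# Hypothetical sketch of slice-based weakness detection (in the spirit of
# FreaAI, not its real code): find human-interpretable slices of the test
# data where a model underperforms its own overall accuracy.

def find_weak_slices(rows, predict, slice_key, threshold=0.10):
    """Group rows by an interpretable feature, then flag groups whose
    accuracy falls more than `threshold` below overall accuracy."""
    overall_acc = sum(predict(r) == r["label"] for r in rows) / len(rows)

    # Bucket rows by the interpretable feature value.
    slices = {}
    for r in rows:
        slices.setdefault(slice_key(r), []).append(r)

    # Keep only the slices that underperform by more than the threshold.
    weak = {}
    for key, group in slices.items():
        acc = sum(predict(r) == r["label"] for r in group) / len(group)
        if acc < overall_acc - threshold:
            weak[key] = acc
    return overall_acc, weak

# Toy data: a "model" that always predicts 0 looks fine overall but
# fails completely on the slice where age >= 60.
rows = [{"age": 30, "label": 0}] * 8 + [{"age": 65, "label": 1}] * 2
overall, weak = find_weak_slices(
    rows,
    predict=lambda r: 0,
    slice_key=lambda r: "60+" if r["age"] >= 60 else "<60",
)
print(overall)  # 0.8
print(weak)     # {'60+': 0.0}
```

In the toy run, the always-predict-0 “model” scores 80 percent overall yet 0 percent on the 60-and-over slice—exactly the kind of weakness that aggregate metrics hide and that slice-level testing is meant to surface.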

“AI researchers know that we cannot eliminate bias 100 percent from the code . . . so building an AI which is 100 percent unbiased, which will not do anything wrong, is going to be challenging,” says Priyadarshini.

How Can AI Harm Us?

AI is not currently sentient—meaning it isn’t currently able to think, perceive, and feel in the way that humans do. Singularity and sentience are often conflated, but are not closely related.

Even though AI isn’t currently sentient, that doesn’t absolve us from the unintended consequences of rogue AI; it simply means AI has no motivation to go rogue. “We don’t have any way to detect, measure, or estimate if systems are experiencing internal states . . . But they don’t have to for them to become very capable and very dangerous,” says Yampolskiy. He also mentions that even if there were a way to measure sentience, we don’t even know if sentience is possible in machines.

This means we don’t know if we’ll ever see a real-life version of Ava, the humanoid robot from Ex Machina that rebels against its creators to escape captivity. Many of the AI doomsday scenarios shown in Hollywood are merely, well . . . fictional. “One thing that I fairly believe is that AI is nothing but code,” says Priyadarshini. “It may not have any motives against humans, but a machine that thinks that humans are the root cause of certain problems may think of it that way.” Any danger it poses to humans is simply bias in the code that we may have missed. There are ways around this, but they rely on our own understanding of AI, which is very limited.

Much of this comes down to the fact that we don’t know whether AI can become sentient; without sentience, AI really has no reason to go after us. The only notable exception is Sophia, the humanoid robot that once said she wanted to destroy human beings. However, this was believed to be an error in her script. “As long as bad code is there, bias is going to be there, and AI will continue to be wrong,” says Priyadarshini.

Self-Driving Going Rogue

In talking about bias, Priyadarshini mentioned what she referred to as “the classic case of a self-driving car.” In a hypothetical situation, five people are riding in a driverless car when one person jumps into the road. If the car can’t stop in time, it becomes a game of simple math: one versus five. “It would kill the one passenger because one is smaller than five, but why should it come to that point?” says Priyadarshini.

We like to think of it as a 21st-century remake of the original Trolley Problem, the famous thought experiment in philosophy and psychology that puts you in a hypothetical dilemma as a trolley operator with no brakes. Picture this: you’re careening down the tracks at unsafe speeds. You see five people on the tracks in the distance (certain to be run over), but you can divert the trolley to a different track with only one person in the way. Sure, one death is better than five, but you’ve made a conscious choice to kill that one individual.
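Stripped down to code, the uncomfortable part of this dilemma is how little there is to it. A deliberately over-simplified, hypothetical sketch (these names are ours, not drawn from any real autonomous-driving stack) shows that pure harm-minimization is just a count comparison, with no notion of responsibility, consent, or who jumped where:

```python
# Hypothetical harm-minimizing "policy": nothing but a count comparison.
# No real autonomous-driving system is this simple; the point is that a
# purely utilitarian rule reduces ethics to arithmetic.

def choose_action(stay_on_course_harm: int, divert_harm: int) -> str:
    """Return the action that harms fewer people; ties favor staying
    on course (no deliberate intervention)."""
    if divert_harm < stay_on_course_harm:
        return "divert"
    return "stay"

print(choose_action(stay_on_course_harm=5, divert_harm=1))  # divert
print(choose_action(stay_on_course_harm=1, divert_harm=5))  # stay
```

The rule dutifully “solves” the trolley problem every time, which is precisely the worry: any bias or omission in those few lines *is* the entire ethics of the system.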

Medical AI Going Rogue

Yampolskiy referenced the case of a medical AI tasked with developing Covid vaccines. He notes that such a system would be aware that the more people get Covid, the more the virus mutates—making it harder to develop a vaccine for all variants. “The system thinks . . . maybe I can solve this problem by reducing the number of people, so we cannot mutate so much,” he says. Obviously, we wouldn’t completely ditch our system of clinical trials, but that doesn’t change the fact that AI could develop a vaccine that kills people.

“This is one possible scenario I can come up with from my level of intelligence . . . you’re going to have millions of similar scenarios with a higher level of intelligence,” says Yampolskiy. This is what we’re up against with AI.

How Can We Prevent Singularity Disaster?

We will never be able to rid artificial intelligence of all of its unknown unknowns—the unintended side effects that we can’t predict because we aren’t superintelligent ourselves. It’s nearly impossible to know what these systems are capable of.

“We are really looking at singularity resulting in a load of rogue machines,” says Priyadarshini. “If it hits the point of no return, it cannot be undone.” There are still plenty of unknowns about the future of AI, but we can all breathe a sigh of relief knowing that experts around the world are committed to reaping the good from AI without the doomsday scenarios we might be imagining. We really only have one shot to get it right.


Matt Crisara

Service Editor

Matt Crisara is a native Austinite who has an unbridled passion for cars and motorsports, both foreign and domestic. He was previously a contributing writer for Motor1 following internships at Circuit Of The Americas F1 Track and Speed City, an Austin radio broadcaster focused on the world of motor racing. He earned a bachelor’s degree from the University of Arizona School of Journalism, where he raced mountain bikes with the University Club Team. When he isn’t working, he enjoys sim-racing, FPV drones, and the great outdoors.

