Experts say these 7 emerging technologies should concern us

  • As new technologies proliferate and hit the market, experts and industry leaders have raised red flags about their potential for harm.
  • Governments across the globe are just now beginning to grapple with the impacts of technologies that have been shaping our lives for years.
  • Here’s why experts say we should be concerned about facial recognition, gene editing, AI, and other technologies that are starting to hit the mainstream.

Nearly 10 years ago, Jeff Bezos posed this rhetorical question:

“The common question that gets asked in business is, ‘why?’ That’s a good question, but an equally valid question is, ‘why not?’”

Since then, tech giants like Amazon have made huge strides in developing and rolling out technologies that push the limits of what we thought was possible.

Emerging technologies like artificial intelligence and computer-aided design have reshaped markets and made millions for the companies that employ them.

But in recent years, experts and industry leaders have raised red flags about the ways those technologies are being implemented, or their potential for harm.

Laws and regulations inherently move at a slower pace than technological innovation. As such, many regulatory bodies are just now beginning to grapple with the impacts of technologies that have been shaping our lives for years.

Going forward, lawmakers and Silicon Valley leaders will have to grapple with the risks surrounding new technologies as they weigh them against the omnipresent question that Bezos posed back in 2011: “Why not sell it, too?”

Here are the emerging technologies that have set off alarm bells for experts.

Artificial intelligence may overpower human autonomy.


A decade ago, artificial intelligence was a truly nascent technology, with few real-world applications beyond solving checkers. Now, it’s an umbrella term describing countless individual technologies that we encounter daily – and that populate much of this list.

Concerns about AI are nothing new – dystopian anxieties about man-versus-machine have been mainstream since HAL’s villainous arc in “2001: A Space Odyssey.”

The fear that computers will one day outsmart and out-muscle humans has also been lent credence by scientists. In his book, “Brief Answers to the Big Questions,” Stephen Hawking presented a parable about humans’ belief that they will always be able to “pull the plug” on computers that become too powerful:

“People asked a computer, ‘Is there a God?’ And the computer said, ‘There is now,’ and fused the plug.”

However, experts are already voicing concerns about the way AI is being used today, citing threats to privacy and governance. A few of these AI-driven technologies are explored below.

Facial recognition technology is reshaping our conception of privacy.

Computers now know how to recognise your face and differentiate it from the faces of others – it’s how people use Snapchat filters and unlock their iPhones.

It’s also being used by an increasing number of law enforcement agencies to surveil people and catch suspects. In one of the most notable cases, Amazon has licensed its facial recognition software, Rekognition, for use by police and other government agencies, drawing backlash from privacy advocates and its own employees.

A team of researchers from NYU published a report in December warning of the risks of facial recognition technology.

“What happens when we rely on black-boxed AI systems to judge the ‘interior life’ or worthiness of human beings?” researcher Kate Crawford said in an interview with The Intercept. “Some of these products cite deeply controversial theories that are long disputed in the psychological literature, but are being treated by AI startups as fact.”

Governments are letting computers make automated decisions that affect people’s lives.


For years, Tammy Dobbs, an Arkansas woman with cerebral palsy, received 56 hours per week of home care from a state disability program. Then, in 2016, this amount was cut without explanation to 32 hours per week, throwing her life out of balance, according to a report from the AI Now Institute at New York University.

The decision to cut Dobbs’ home care wasn’t made by a bureaucrat or lawmaker. It was made by a computer program that the state had just decided to implement.

Dobbs and other disabled citizens of Arkansas ultimately won a lawsuit against the state to reverse the AI-driven decision, but damage to their quality of life was already done, the AI Now report says.

AI researchers cite Dobbs’ story as an example of the risks of AI-driven government decisions, an increasingly common practice across local and national agencies.

Algorithm-driven content feeds can manipulate people en masse.


The average user of Twitter, Facebook, or Instagram knows which accounts they follow and whose content they choose to interact with. But they likely have little idea why certain posts appear in their feed, or why posts appear in the order that they do.

That’s because social media feeds are increasingly determined by AI-driven algorithms that are opaque to average users.

While social media giants have an interest in using algorithms to keep users engaged or promote ads, experts warn that these algorithms have the potential to be used to spread misinformation or manipulate people.

Spyware, ransomware, and hacks are getting more advanced.


As technology becomes more sophisticated, so do the methods used by hackers to steal people’s data or hold tech infrastructure hostage for monetary gain. In recent years, hacking has only been on the rise – and it’s becoming increasingly profitable.

Furthermore, experts warn that corporations and governments aren’t doing enough to protect their systems from potential hackers.

“The entirety of our computing infrastructure, including all of our finance and health systems, is an insecure, untrustworthy mess,” Emin Gün Sirer, a cryptocurrency expert and professor at Cornell University, said in an interview with VICE.

3D printing and computer-aided design have unintended side effects.


When 3D printing technology began to go mainstream in recent years, it was heralded as an innovation that would empower individuals to create products on their own.

However, this empowerment comes with downsides. One highly publicized example was the ability to 3D print guns, which is now largely illegal but still technically possible.

Additionally, experts warn that 3D printing is highly wasteful, consuming significant energy and producing plastic waste and noxious emissions.

Gene editing could reshape society, but the side effects are unknown.


Since the discovery of genetics, scientists have theorised about the possibility of editing people’s DNA to lengthen life or cure diseases. Recently, this aspiration started to become a reality, with one Chinese scientist editing two human embryos in an attempt to make them resistant to HIV, the virus that causes AIDS.

This emerging technology is still rife with risks, however. In the case of the HIV-resistance procedure, new research suggests that the edit intended to make people immune can also shave years off their lives.

Tech watchdogs have also raised concerns around the prospect of people making cosmetic edits to the genomes of their unborn children in the future, arguing it could lead to a new form of eugenics.