Of all the tech trends that have mesmerized our world over the past decade, only a few can match the hype about artificial intelligence (AI). Popularized as a recurrent theme in science-fiction movies in the 80s and 90s, AI is now a central part of everyday life. It powers our autonomous cars, helps us with our Netflix recommendations, and is even expected to add almost $16 trillion to the global economy by 2030.
Dozens of startups and established companies in Silicon Valley and across the world have invested billions into AI technologies, with Microsoft, IBM, and Google among the big tech companies that are digging in.
Still, AI faces a ton of challenges in the real world that make many of its promised offerings feel like hyped-up concepts. Overcoming these challenges is increasingly becoming a priority for the tech industry, because how well we address them will determine how well we can separate the hype from the potential.
1. Lack of computing power
AI algorithms, even the most basic ones, require a tremendous amount of processing power. Basic AI systems often work by processing thousands of data points every second, which demands top-tier computing infrastructure for both the development and the use of AI technologies.
Before AI became the disruptive force it is today, most of its concepts were theories and fodder for Hollywood's blockbuster movies. But as cloud computing and parallel processing systems became more advanced, it suddenly became possible to process massive amounts of data, which helped drive innovations in machine learning, natural language processing, and deep learning.
However, as the volume of data continues to grow, current systems won’t be able to keep up, thus slowing down developments in AI. Possible solutions include the development of quantum computing and advanced supercomputers, which might still take years to materialize.
2. Security risks
Nearly two decades after the internet became a worldwide phenomenon, online security is still a pain for the digital community. Cybercriminals are getting better at their craft, increasingly putting business and home security in jeopardy.
But while AI has often been heralded as the savior when it comes to identifying threats and securing networks, many cybersecurity experts still believe the technology can also be used as part of a hacker’s toolkit.
In a study conducted by Cylance, 62 percent of cybersecurity experts think AI technologies will be used to carry out cyber attacks within the coming year. These experts believe that AI could be used to create advanced offensive tactics, which could help attackers gain entry into AI-powered systems anywhere in the world.
A hacked AI system can give infiltrators access to whole networks—for instance, financial systems—which can result in unprecedented economic losses. As such, cybersecurity infrastructure and policies must be beefed up before any AI systems are deployed, a priority for governments and corporations as AI spreads across industries.
3. Trust and ethical issues
As AI plays a bigger role in everyday life, concerns about trust and the ethical ramifications of AI technologies continue to grow.
A typical AI system is usually complex, with multiple algorithms working together to produce intelligent results for, say, a Siri search or a Netflix recommendation. This makes scrutiny of the underlying processes difficult, and auditing them to establish data-based trust almost impossible.
So without scrutiny and an industry-wide certification standard, it's also impossible to guarantee the safety of these systems, which makes adoption difficult for mission-critical operations in industries such as healthcare.
Additionally, as AI becomes commonplace in workplaces in the form of robots, jobs for humans will most likely be lost to these automated workers, with those who retain their jobs forced to work alongside the robots. A good example is Foxconn Technology Group in China, which recently announced plans to replace 60,000 workers with robots.
In the end, automation will have a huge impact on the physical and mental health of workers who interact with AI. When factories close due to automation, there’s always the increased risk of substance abuse, depression, suicide, and other psychosocial ailments.
Tackling the ethical issues surrounding AI will be vital to its growth, which is why companies like Google and IBM have brought on ethics advisors and boards to monitor AI development and implementation.
DISCLAIMER: This article expresses my own ideas and opinions. Any information I have shared is from sources that I believe to be reliable and accurate. I did not receive any financial compensation for writing this post, nor do I own any shares in any company I've mentioned. I encourage any reader to do their own diligent research first before making any investment decisions.