
Is AI safe for use within Technology and Defense Sectors?

Posted on August 2023 by Anya Constantinescu


Artificial Intelligence (AI) is currently at the frontier of new and emerging technologies, surpassing contemporaries such as Blockchain and the Internet of Things in both notoriety and controversy. Whilst the government continues to research, develop, and experiment with AI in a bid to revolutionise the capabilities of the Armed Forces, and industries continue to harness its power to automate tasks, the uncertainties surrounding this emerging technology remain impossible to overlook.

iO Associates are committed to engaging with the latest tech trends in order to expand and develop our audience. With this in mind, last week iO launched a poll on our UK LinkedIn page asking whether our community thought Artificial Intelligence is safe to use, both now and in the future. The results were conclusive, with the majority (68%) believing that the use of AI is not safe. In light of these findings, let's delve into the latest research on AI's safety in both the technology and defense sectors.

Emerging technology is increasingly central to defense strategy, becoming a new cornerstone of the technological forefront. But areas such as weapons automation raise questions of morality despite their innovation. These questions focus on the possibility of delegating lethal decisions to AI-powered autonomous machines, which could potentially lead to uncontrollable and catastrophic scenarios. Although this concern may feel far off and worryingly futuristic, the reality is that Artificial Intelligence has become indispensable for both defense and attack.

The exponential growth of AI technologies is paralleled by an alarming escalation in security concerns, particularly hacking and other malicious attacks. Increasingly sophisticated AI is a double-edged sword, opening the door to higher security risks and maliciously targeted damage: attacks that could bypass security measures and exploit vulnerabilities in systems. Currently, there is no comprehensive federal legislation dedicated solely to AI regulation in the US; the UK takes a similar approach and also lacks dedicated AI regulations. Instead, pre-existing bodies oversee the safety of AI technology. The EU's legislation, however, has taken a proactive stance and categorizes AI applications by risk level, from low-risk AI games to high-risk credit score evaluation systems.

While AI holds the promise of transforming industries, there is a fine line between leveraging its power and becoming overly reliant. Overdependence on AI technologies might inadvertently erode critical human skills such as creativity, intuition, and critical thinking. Striking a healthy balance between AI-assisted decision-making and human input is vital to preserving and nurturing our cognitive abilities.

Moreover, there's a risk of AI contributing to economic inequality by disproportionately benefiting wealthy individuals and large corporations. Job losses triggered by AI-driven automation are more likely to affect low-skilled workers, increasing income inequality and limiting opportunities for social mobility. A Goldman Sachs report suggests AI could replace 300 million full-time jobs globally, affecting industries from architecture to management.

Nevertheless, it remains essential to acknowledge that AI also offers a multitude of benefits within the defense sector, as recognised by the 33% of respondents who believe that AI is safe to use. AI-powered robots can work 24 hours a day, seven days a week, requiring no rest time or work-life balance; many studies have found that humans are at their most productive for only 3 to 4 hours each day, completing tasks at a much slower rate. AI can enhance decision-making processes, optimise resource allocation, and improve situational awareness. From predictive maintenance of equipment to data analysis for strategic planning, AI-driven solutions can significantly bolster defense capabilities.

While AI technology offers immense promise and potential across various sectors, including defense, it is not without its unique safety risks. The landscape is complex and multifaceted, from ethical considerations in defense to job displacement, economic inequality, and legal challenges. Striking the right balance between harnessing AI's capabilities and mitigating its risks is essential for a future where AI serves the best interests of humanity while avoiding potential pitfalls. As AI continues its rapid evolution, it's imperative for governments, industries, and society as a whole to engage in thoughtful and informed discussions about its responsible and ethical implementation.

What are your thoughts? Do you think AI technology is safe to use, or should we cease further investigation to mitigate potential catastrophe? Let us know by getting in touch or connecting with our iO Associates US page on LinkedIn here.