THIS year we celebrate the 70th anniversary of the Geneva Conventions, and it is certain that new weapons technologies, artificial intelligence (AI) and cyber among them, will shape human warfare over the next 70 years.
I will give a humanitarian perspective on this latest revolution in military affairs by reflecting generally on our relationship with technology as human beings, and by defining the humanitarian challenges of AI and cyber, including their ambiguity as weapons.
What is the relationship between humans and technology? As a species of animal we humans are exceptional in our relationship with technology.
Our imagination, reasoning and creativity set us apart. We are always inventing things and making things.
Much of our history is the story of how we harness powers beyond us – animal, vegetable and physical – to our own human purposes.
Through most of our existence as Homo sapiens we have merged ourselves with these non-human powers to operate mostly in hybrid form.
We rode horses to go faster. We made spears and rifles to kill at a distance. We vaccinated ourselves to fight off microscopic predators.
Today we are deeply attached to computers which enable us to reach across time and space, and make complex calculations and connections at incredible speed.
Our new cyber-humanity is set to dictate our next era as a species as we increasingly live in human-machine interactions in all parts of our lives.
These human-machine interactions will become increasingly integrated as we move from hand-held devices to more sophisticated “interfaces” that respond instantly to our five senses, and are increasingly implanted and embedded in our bodies.
Our relationship with our technology has always been ambivalent – enabling good and bad outcomes. Our very first experiments with fire – a symbol of our creativity – made clear that we can use flames to warm or burn one another.
Human technology is obviously not a simple thing. It is neither purely good nor purely bad. Its moral value depends on how we use it.
We can expect this ambivalence to continue as we use AI and cyber increasingly in war. These new capabilities will have the power to make war better and worse.
What is this new technology that we call artificial intelligence (AI) and cyber? Artificial intelligence is the use of computers to carry out tasks previously requiring human intelligence, cognition or reasoning, and often going beyond what is humanly possible.
This often involves machine learning, in which AI systems use large amounts of data to improve their own functioning and “learn” from experience.
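As a toy illustration only (not any military system), the sketch below shows machine “learning” in miniature: the program is never told the rule for logical AND; it infers it by adjusting its own parameters against labelled examples.

```python
# Illustrative sketch: a minimal "machine learning" loop. The model is not
# programmed with the answer; it nudges its own parameters toward the data.

def train_perceptron(examples, epochs=20, lr=0.1):
    """Learn weights for a two-input threshold unit from labelled examples."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in examples:
            predicted = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            error = target - predicted
            # The "learning" step: adjust parameters in proportion to the error.
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            b += lr * error

    def predict(x1, x2):
        return 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
    return predict

# The behaviour of logical AND is never written down; it is inferred from data.
and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
learned_and = train_perceptron(and_data)
```

Scaled up from four examples to millions, and from two parameters to billions, this same principle — behaviour shaped by data rather than explicit instruction — is what makes modern AI systems both powerful and hard to predict.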
These AI systems are typically based on algorithms: step-by-step mathematical procedures named after the ninth-century Muslim scholar from Persia, Al-Khwarizmi, whose work on systematic calculation gave us the term.
So we have the Islamic enlightenment to thank for this great digital leap forward!
Cyber is a word drawn from twentieth century science fiction to refer generally to the computerized or virtual space generated and maintained by computers.
It is also important to define what we mean by a weapon at this point because much of the ethical and legal dispute around AI and cyber turns on whether these systems are weapons per se or some form of non-human combatant complete with their own autonomy and decision-making.
The sword is perhaps the archetypal weapon. It is an inanimate object made by human hands and only ever operable in human hands.
A sword is hammered into shape, sharpened into a blade and then held and wielded as a deadly weapon by a warrior.
A sword has no mind of its own and cannot make decisions. When the warrior is resting, the sword lies inanimate beside him on the grass. A sword only becomes a weapon when taken up in human hands.
Indeed the original definition of a weapon is an object that only comes to life when operated by a human.
Other things like landmines and carefully covered holes are really traps set by humans that tend to function indiscriminately.
So, traditionally, a weapon has no life of its own and operates only under human control. If a sword suddenly leapt into life and began fighting on its own, it would no longer be a weapon but a non-human combatant.
Here is the first humanitarian challenge from AI and deep learning machines – the challenge of autonomy.
If an AI weapon system is launched onto a battlefield to loiter, patrol or seek out the enemy, and is learning as it goes (even if it has been programmed within certain parameters by a human), then when does it stop being a weapon and become a non-human combatant, with autonomy in its “critical functions” and making its own targeting and activation decisions?
The second challenge is one of humanitarian judgement. How can we be sure that a process of machine learning will stay true to humanitarian principles of restraint, distinction, precaution and proportionality, even if they have initially been programmed in as defaults?
Can we rely on an AI system to become more humane or less humane as its system “learns” from the environment around it? Is machine learning predictable along a given pathway, or not?
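The worry about predictability can be made concrete with a deliberately artificial sketch (the decision labels here are invented purely for illustration): the learning rule below is identical in both cases, yet because it copies whatever “experience” its environment happens to supply, two differently exposed copies reach opposite decisions about the very same input.

```python
# Illustrative only: the same fixed learning rule, exposed to two different
# streams of experience, ends up behaving differently on identical input.
# The rule never changes; its behaviour is a product of its environment.

def nearest_neighbour_decision(experience, query):
    """Decide the label of `query` by copying the closest past example."""
    closest = min(experience, key=lambda example: abs(example[0] - query))
    return closest[1]

# Two environments teach opposite lessons about the same borderline value.
environment_a = [(1.0, "hold fire"), (10.0, "engage")]
environment_b = [(4.5, "engage"), (0.0, "hold fire")]

decision_a = nearest_neighbour_decision(environment_a, 5.0)  # "hold fire"
decision_b = nearest_neighbour_decision(environment_b, 5.0)  # "engage"
```

A real AI targeting system would be vastly more complex, but the structural point stands: if behaviour is learned from the environment, then guaranteeing that programmed humanitarian defaults survive contact with that environment becomes a genuine engineering and legal problem, not a formality.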
The third humanitarian problem is one of speed: the stunning difference between human speed and machine speed. Even if a human were still “controlling” an AI learning system on the battlefield, could they keep pace with the learning and decision-making speed of the machine?
In a world in which a simple text sent from Kuala Lumpur to Jakarta arrives almost the moment you send it, what chance do we humans have of thinking as fast as our machines over a complex battlefield, and of controlling them once they are in mid-flow?
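The scale of that speed gap can be illustrated with a rough, machine-dependent experiment: human reaction time to a visual stimulus is commonly cited at around a quarter of a second, and the snippet below simply counts how many trivial “decisions” (a comparison and a branch) an ordinary computer makes in that same window.

```python
# Rough illustration of the human/machine speed gap. The exact count is
# machine-dependent; the point is the order of magnitude.

import time

HUMAN_REACTION_SECONDS = 0.25  # commonly cited approximate figure

decisions = 0
deadline = time.perf_counter() + HUMAN_REACTION_SECONDS
while time.perf_counter() < deadline:
    # One trivial machine "decision": test a condition and act on it.
    if decisions % 2 == 0:
        decisions += 1
    else:
        decisions += 1

print(f"Machine decisions in one human reaction time: {decisions:,}")
```

Even in slow interpreted Python, the count runs to hundreds of thousands or millions; a purpose-built targeting system would be faster still. This is why “human control” exercised in real time, decision by decision, may be physically impossible once a machine is in mid-flow.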
Problems of autonomy, judgement and speed are significant in what we can expect to be complex AI targeting systems using big data or agile robotic weapons operating at the physical frontline like drones of many kinds.
These three challenges also affect more general cyber warfare which seeks out enemy computer systems and inserts malware to pause, repurpose or destroy their core functions.
But cyber warfare also presents three additional humanitarian challenges.
The fundamental computer dependence of so many essential services today - like health, water, energy, finance, education and communications - makes the civilian population deeply vulnerable to precisely targeted cyber attacks.
Cyber capability, therefore, gives rise to a very broad attack surface in war today and this large surface is extremely vulnerable to a single pin-point attack which could “switch off” a whole society in seconds.
This capability creates the grave humanitarian risk of maximum effect from minimal strike.
If cyber systems offer a broader attack surface, they also potentially offer a deeper and more personalized one, gathering highly individual data or profiling from big data to mount major surveillance and response operations at the individual level.
Millions of individual people could be monitored simultaneously by large computer systems during armed conflict in which AI could be used for machine-based decision-making about, for example, who should be detained, conscripted into the armed forces or deported.
These humanitarian risks are increased by another feature of cyber attacks – anonymity and problems of attribution.
It is often difficult to know who has made the attack and, even if you do know, it can be unwise to show knowledge of the attacker for fear of revealing your own virtual position in a conflict.
The current ease of anonymity and disguise in cyber warfare diffuses responsibility and evades the formal systems of accountability, deterrence and shaming which usually support humanitarian norms.
In the next article, I will suggest that law and “human control” are the best guides to the responsible use of AI and cyber, and express some optimism about AI and cyber to qualify the gloom in current predictions.
------------------------------------------------------------------------------------------------------------------------------------
Dr Hugo Slim is Head of Policy and Humanitarian Diplomacy for the International Committee of the Red Cross (ICRC), a humanitarian organization that has been helping people around the world affected by armed conflict and other violence for over 150 years.
The views and opinions expressed in this article are those of the author(s) and do not necessarily reflect the position of Astro AWANI.
Dr Hugo Slim
Tue Nov 12 2019