Cyber Security Weekly Podcast
Episode 386 - AI and the Law
Mr Yeong Zee Kin holds a Master of Laws from Queen Mary University of London and completed his undergraduate law degree at the National University of Singapore. His experience as a Technology, Media and Telecommunications lawyer spans both the private and public sectors. He has spoken and published in areas relating to electronic evidence and intellectual property, as well as legal issues relating to Blockchain and AI deployment.
Zee Kin is an internationally recognized expert on AI ethics. He spearheaded the development of Singapore’s Model AI Governance Framework, which won the UN ITU WSIS Prize in 2019. He is currently a member of the OECD Network of Experts on AI (ONE AI). In 2019, he was a member of the AI Group of Experts at the OECD (AIGO), which developed the OECD Principles on AI; these principles were endorsed by the G20 in 2019. He was also an observer participant at the European Commission’s High-Level Expert Group on AI, which fulfilled its mandate in June 2020.
Zee Kin is also a well-regarded expert on data privacy issues. He has contributed to publications on legal issues relating to data privacy and has spoken at many well-recognised international and domestic platforms on this topic.
--
In this interview, Zee Kin shares his insights on the legal challenges in the era of advanced AI.
Zee Kin highlighted that with the latest AI innovations, the responsibility and legal issues remain largely consistent, but the tools and technology introduce different challenges.
For instance, he shared that concerns around content, child protection, intermediary behavior, data security, data protection, and cybercrime remain, while challenges such as the detection of fake content have intensified due to increased tool accessibility and the scalability of threats.
Referring to the "Getty vs. Stability AI" case, he shared that the interesting question is the use of copyrighted data to train AI models – which is not new, and the key is to establish a proper legal basis for using such data. Data lineage and the provenance of data have always been important in legal contexts.
He also noted that these concerns have surfaced in recent governmental responses around the world to the latest AI innovations.
Zee Kin also highlighted the challenge of defining terms such as "fairness," "transparency," and "repeatability," whose meaning varies by context: expectations and priorities for AI differ based on its use, such as safety and predictability in medicine, and bias and fairness in personal data applications.
Repeatability poses an additional challenge in Generative AI because every iteration of an image or summary will vary, owing to Generative AI's statistical, predictive nature.
Zee Kin also shared his views on AI's impact on job security, noting that there will be emerging opportunities for lawyers to use AI tools for efficiency and error reduction.
Recorded at TechLaw Fest 2023, 21st Sept 2023, 3.30pm, Marina Bay Sands, Singapore.
#mysecuritytv #cybersecurity #ai #law #ailawyer