Teaching Machines to Learn Better: Mitigating Discrimination, Fostering Principled Governance and Communicating Expectations of Emerging AI
2017 theme: Training & Best Practices
This conversation will seek to expand the dialogue toward a holistic framing of our expectations of artificial intelligence. It will try to bridge three issues I see as particularly relevant: 1) much of the data that comprises training data for machine learning can hold inherent biases; 2) only ad-hoc governance principles have emerged to form the basis of a social contract between humans and machines; and 3) many individuals who encounter smart systems are not fully aware of the ways their data and clicks are creating nominal path dependency.

I believe there is a compelling interest in exploring the moral and ethical implications of emerging technology, and in discussing them with thought leaders and digital advocates, while deconstructing the relationship between private, proprietary algorithms and users. If ubiquitous, for-profit and opaque technology will increasingly surround us, what are the fundamental elements of our 'social contract' with technology in the age of surveillance capitalism?

Over the course of the hour, I will foster a conversation to better understand how we think about an automated future. The starting point for this conversation will be six intrusive forces that exploit invasive data collection, which I have coined 'MIMICS':
- Manipulation (of our feeds and search results)
- Indexing (of our clicks, pageviews and social graphs)
- Monitoring (of our content consumption patterns to shape future results)
- Interception (of data via upstream surveillance)
- Censorship (through arbitrarily enforced content moderation policies)
- 'Siloing' (which forces users to keep their data within the walled gardens of a single platform)
Target Groups: policy makers, technologists, researchers, academics