Latest revision as of 18:06, 14 February 2017

Teaching Machines to Learn Better: Mitigating Discrimination, Fostering Principled Governance and Communicating Expectations of Emerging AI
Presenter(s) Matthew Stender
Title(s) Tech Ethicist
Organization(s)
Project(s)
Country(ies) USA, Germany
Social media @stenderworld
2017 theme Tools & Technology

This conversation will seek to expand the dialogue toward a holistic framing of our expectations of artificial intelligence. It will try to bridge three issues I see as particularly relevant: 1) much of the data used to train machine learning systems can hold inherent biases; 2) only ad-hoc governance principles have emerged to form the basis of a social contract between humans and machines; and 3) many individuals who encounter smart systems are not fully aware of the ways their data and clicks are creating path-dependency.

I believe there is a compelling interest in exploring the moral and ethical implications of emerging technology and discussing them with thought leaders and digital advocates, while deconstructing the relationship between private, proprietary algorithms and users. If ubiquitous, for-profit and opaque technology will increasingly surround us, what are the fundamental elements of our ‘social contract’ with technology in the age of surveillance capitalism? Over the course of the hour, I will foster a conversation to better understand how we think about an automated future.

The starting point for this conversation will be six intrusive forces that exploit invasive data collection, which I have coined ‘MIMICS’: Manipulation (of our feeds and search results), Indexing (of our clicks, pageviews and social graphs), Monitoring (of our content consumption patterns to shape future results), Interception (of data via upstream surveillance), Censorship (through arbitrarily enforced content moderation policies) and ‘Siloing’ (which forces users to keep their data within the walled gardens of a single platform).


Format Conversation
Target Groups policy makers, technologists, researchers, academics
Length 1 hour
Skill Level novice
Language English


Session Outputs

Next Steps

Additional Notes

Relevant Resources

Contributors