<?xml version="1.0" encoding="utf-8"?>
<?xml-stylesheet type="text/xsl" href="http://webfeeds.brookings.edu/feedblitz_rss.xslt"?><rss xmlns:content="http://purl.org/rss/1.0/modules/content/"  xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd"  xmlns:a10="http://www.w3.org/2005/Atom" version="2.0" xmlns:feedburner="http://rssnamespace.org/feedburner/ext/1.0"><channel xmlns:dc="http://purl.org/dc/elements/1.1/"><title>Brookings Series - The Future of the Constitution</title><link>http://www.brookings.edu/about/programs/governance/future-of-the-constitution?rssid=Future+of+the+Constitution</link><description>Brookings Series - The Future of the Constitution</description><language>en</language><lastBuildDate>Tue, 13 Dec 2011 00:00:00 -0500</lastBuildDate><a10:id>http://www.brookings.edu/series.aspx?feed=Future+of+the+Constitution</a10:id><a10:link rel="self" type="application/rss+xml" href="http://www.brookings.edu/series.aspx?feed=Future+of+the+Constitution" /><pubDate>Sat, 23 Jul 2016 02:10:27 -0400</pubDate>
<itunes:explicit>no</itunes:explicit>
<itunes:summary>Brookings Series Feed</itunes:summary>
<itunes:subtitle>Brookings Series Feed</itunes:subtitle>
<item>
<feedburner:origLink>http://www.brookings.edu/research/books/2011/constitution30?rssid=Future+of+the+Constitution</feedburner:origLink><guid isPermaLink="false">{C3652274-B78C-4FCB-AC30-0DE4243083F6}</guid><link>http://webfeeds.brookings.edu/~/65487885/0/brookingsrss/series/futureoftheconstitution~Constitution-Freedom-and-Technological-Change</link><title>Constitution 3.0: Freedom and Technological Change</title><description><![CDATA[<div>
	<img src="http://www.brookings.edu/~/media/press/books/2011/constitution30/constitution30.jpg" alt="" border="0" /><br /><div>
		Brookings Institution Press, 2011, 271 pp.
	</div><br/><div>
		Technological changes are posing stark challenges to America’s core values. Basic constitutional principles find themselves under stress from stunning advances that were unimaginable even a few decades ago, much less during the Founders’ era. Policymakers and scholars must begin thinking about how constitutional principles are being tested by technological change and how to ensure that those principles can be preserved without hindering technological progress.<br><br>
<em>Constitution 3.0</em>, a product of the Brookings Institution’s landmark Future of the Constitution program, presents an invaluable roadmap for responding to the challenge of adapting our constitutional values to future technological developments. Renowned legal analysts Jeffrey Rosen and Benjamin Wittes asked a diverse group of leading scholars to imagine plausible technological developments in or near the year 2025 that would stress current constitutional law and to propose possible solutions. Some tackled issues certain to arise in the very near future, while others addressed more speculative or hypothetical questions. Some favor judicial responses to the scenarios they pose; others prefer legislative or regulatory responses.<br><br>
Here is a sampling of the questions raised and answered in <em>Constitution 3.0</em>:<br><br>
• How do we ensure our security in the face of the biotechnology revolution and our overwhelming dependence on internationally networked computers?<br><br>
• How do we protect free speech and privacy in a world in which Google and Facebook have more control than any government or judge?<br><br>
• How will advances in brain scan technologies affect the constitutional right against self-incrimination?<br><br>
• Are Fourth Amendment protections against unreasonable search and seizure obsolete in an age of ubiquitous video and unlimited data storage and processing?<br><br>
• How vigorously should society and the law respect the autonomy of individuals to manipulate their genes and design their own babies?<br><br>
Individually and collectively, the deeply thoughtful analyses in <em>Constitution 3.0</em> present an innovative roadmap for adapting our core legal values, in the interest of keeping the Constitution relevant through the 21st century.<br><br>
<strong>Contributors include:</strong> Jamie Boyle, Eric Cohen, Robert George, Jack Goldsmith, Orin Kerr, Lawrence Lessig, Stephen Morse, John Robertson, Jeffrey Rosen, Christopher Slobogin, O. Carter Snead, Benjamin Wittes, Tim Wu, and Jonathan Zittrain.
	</div><div>
		<h4>
			ABOUT THE EDITORS
		</h4><h5>
			<a href="http://www.brookings.edu/experts/rosenj.aspx">Jeffrey Rosen</a>
		</h5><div>
			Jeffrey Rosen is a non-resident senior fellow in Governance Studies at the Brookings Institution and a professor of law at the George Washington University in Washington, D.C. He also serves as legal editor for <em>The New Republic</em> and is the author of several books, including <em>The Supreme Court: The Personalities and Rivalries that Defined America</em> (Times Books, 2007) and <em>The Naked Crowd: Reclaiming Security and Freedom in an Anxious Age</em> (Random House, 2005).
		</div><h5>
			<a href="http://www.brookings.edu/experts/wittesb.aspx">Benjamin Wittes</a>
		</h5><div>
			Benjamin Wittes is a senior fellow in Governance Studies at the Brookings Institution and served nine years as an editorial writer with the <em>Washington Post</em>. His previous books include <em>Detention and Denial: The Case for Candor after Guantánamo</em> (Brookings, 2010) and <em>Law and the Long War: The Future of Justice in the Age of Terror</em> (Penguin, 2008), and he is cofounder of the Lawfare blog.
		</div>
	</div><h4>
		Downloads
	</h4><ul>
		<li><a href="http://www.brookings.edu/~/media/press/books/2011/constitution30/constitution30_toc.pdf">Table of Contents</a></li><li><a href="http://www.brookings.edu/~/media/press/books/2011/constitution30/constitution30_chapter.pdf">Sample Chapter</a></li>
	</ul><span>Ordering Information:</span><ul>
		<li>{CD2E3D28-0096-4D03-B2DE-6567EB62AD1E}, 978-0-8157-2212-0, $29.95 <a href="http://jhupbooks.press.jhu.edu/ecom/MasterServlet/AddToCartFromExternalHandler?item=9780815722120&amp;domain=brookings.edu">Add to Cart</a></li><li>{9ABF977A-E4A6-41C8-B030-0FD655E07DBF}, 978-0-8157-2450-6, $22.95 <a href="http://jhupbooks.press.jhu.edu/ecom/MasterServlet/AddToCartFromExternalHandler?item=9780815724506&amp;domain=brookings.edu">Add to Cart</a></li>
	</ul>
</div>]]>
</description><pubDate>Tue, 13 Dec 2011 00:00:00 -0500</pubDate><dc:creator>Jeffrey Rosen and Benjamin Wittes, eds.</dc:creator>
<itunes:summary> 
Brookings Institution Press, 2011, 271 pp.
 Technological changes are posing stark challenges to America&#x2019;s core values. Basic constitutional principles find themselves under stress from stunning advances that were unimaginable even a few decades ago, much less during the Founders&#x2019; era. Policymakers and scholars must begin thinking about how constitutional principles are being tested by technological change and how to ensure that those principles can be preserved without hindering technological progress.
Constitution 3.0, a product of the Brookings Institution&#x2019;s landmark Future of the Constitution program, presents an invaluable roadmap for responding to the challenge of adapting our constitutional values to future technological developments. Renowned legal analysts Jeffrey Rosen and Benjamin Wittes asked a diverse group of leading scholars to imagine plausible technological developments in or near the year 2025 that would stress current constitutional law and to propose possible solutions. Some tackled issues certain to arise in the very near future, while others addressed more speculative or hypothetical questions. Some favor judicial responses to the scenarios they pose; others prefer legislative or regulatory responses.
Here is a sampling of the questions raised and answered in Constitution 3.0:
&#x2022; How do we ensure our security in the face of the biotechnology revolution and our overwhelming dependence on internationally networked computers?
&#x2022; How do we protect free speech and privacy in a world in which Google and Facebook have more control than any government or judge?
&#x2022; How will advances in brain scan technologies affect the constitutional right against self-incrimination?
&#x2022; Are Fourth Amendment protections against unreasonable search and seizure obsolete in an age of ubiquitous video and unlimited data storage and processing?
&#x2022; How vigorously should society and the law respect the autonomy of individuals to manipulate their genes and design their own babies?
Individually and collectively, the deeply thoughtful analyses in Constitution 3.0 present an innovative roadmap for adapting our core legal values, in the interest of keeping the Constitution relevant through the 21st century.
Contributors include: Jamie Boyle, Eric Cohen, Robert George, Jack Goldsmith, Orin Kerr, Lawrence Lessig, Stephen Morse, John Robertson, Jeffrey Rosen, Christopher Slobogin, O. Carter Snead, Benjamin Wittes, Tim Wu, and Jonathan Zittrain.
ABOUT THE EDITORS Jeffrey Rosen Jeffrey Rosen is a non-resident senior fellow in Governance Studies at the Brookings Institution and a professor of law at the George Washington University in Washington, D.C. He also serves as legal editor for the New Republic and is the author of several books, including The Supreme Court: The Personalities and Rivalries that Defined America (Times Books, 2007) and The Naked Crowd: Reclaiming Security and Freedom in an Anxious Age (Random House, 2005). Benjamin Wittes Benjamin Wittes is a senior fellow in Governance Studies at the Brookings Institution and served nine years as an editorial writer with the Washington Post. His previous books include Detention and Denial: The Case for Candor after Guant&#xE1;namo (Brookings, 2010) and Law and the Long War: The Future of Justice in the Age of Terror (Penguin, 2008), and he is cofounder of the Lawfare blog. 
Downloads
- Table of Contents
- Sample Chapter
Ordering Information:
- {CD2E3D28-0096-4D03-B2DE-6567EB62AD1E}, 978-0-8157-2212-0, $29.95 Add to Cart
- {9ABF977A-E4A6-41C8-B030-0FD655E07DBF}, 978-0-8157-2450-6, $22.95 Add to Cart
</itunes:summary>
<itunes:subtitle> 
Brookings Institution Press, 2011, 271 pp.
 Technological changes are posing stark challenges to America&#x2019;s core values. Basic constitutional principles find themselves under stress from stunning advances that were unimaginable even a few ... </itunes:subtitle><content:encoded><![CDATA[<div>
	<img src="http://www.brookings.edu/~/media/press/books/2011/constitution30/constitution30.jpg" alt="" border="0" />
<br><div>
		Brookings Institution Press, 2011, 271 pp.
	</div>
<br><div>
		Technological changes are posing stark challenges to America’s core values. Basic constitutional principles find themselves under stress from stunning advances that were unimaginable even a few decades ago, much less during the Founders’ era. Policymakers and scholars must begin thinking about how constitutional principles are being tested by technological change and how to ensure that those principles can be preserved without hindering technological progress.
<br>
<br>
<em>Constitution 3.0</em>, a product of the Brookings Institution’s landmark Future of the Constitution program, presents an invaluable roadmap for responding to the challenge of adapting our constitutional values to future technological developments. Renowned legal analysts Jeffrey Rosen and Benjamin Wittes asked a diverse group of leading scholars to imagine plausible technological developments in or near the year 2025 that would stress current constitutional law and to propose possible solutions. Some tackled issues certain to arise in the very near future, while others addressed more speculative or hypothetical questions. Some favor judicial responses to the scenarios they pose; others prefer legislative or regulatory responses.
<br>
<br>
Here is a sampling of the questions raised and answered in <em>Constitution 3.0</em>:
<br>
<br>
• How do we ensure our security in the face of the biotechnology revolution and our overwhelming dependence on internationally networked computers?
<br>
<br>
• How do we protect free speech and privacy in a world in which Google and Facebook have more control than any government or judge?
<br>
<br>
• How will advances in brain scan technologies affect the constitutional right against self-incrimination?
<br>
<br>
• Are Fourth Amendment protections against unreasonable search and seizure obsolete in an age of ubiquitous video and unlimited data storage and processing?
<br>
<br>
• How vigorously should society and the law respect the autonomy of individuals to manipulate their genes and design their own babies?
<br>
<br>
Individually and collectively, the deeply thoughtful analyses in <em>Constitution 3.0</em> present an innovative roadmap for adapting our core legal values, in the interest of keeping the Constitution relevant through the 21st century.
<br>
<br>
<strong>Contributors include:</strong> Jamie Boyle, Eric Cohen, Robert George, Jack Goldsmith, Orin Kerr, Lawrence Lessig, Stephen Morse, John Robertson, Jeffrey Rosen, Christopher Slobogin, O. Carter Snead, Benjamin Wittes, Tim Wu, and Jonathan Zittrain.
	</div><div>
		<h4>
			ABOUT THE EDITORS
		</h4><h5>
			<a href="http://webfeeds.brookings.edu/~/t/0/0/brookingsrss/series/futureoftheconstitution/~www.brookings.edu/experts/rosenj.aspx">Jeffrey Rosen</a>
		</h5><div>
			Jeffrey Rosen is a non-resident senior fellow in Governance Studies at the Brookings Institution and a professor of law at the George Washington University in Washington, D.C. He also serves as legal editor for <em>The New Republic</em> and is the author of several books, including <em>The Supreme Court: The Personalities and Rivalries that Defined America</em> (Times Books, 2007) and <em>The Naked Crowd: Reclaiming Security and Freedom in an Anxious Age</em> (Random House, 2005).
		</div><h5>
			<a href="http://webfeeds.brookings.edu/~/t/0/0/brookingsrss/series/futureoftheconstitution/~www.brookings.edu/experts/wittesb.aspx">Benjamin Wittes</a>
		</h5><div>
			Benjamin Wittes is a senior fellow in Governance Studies at the Brookings Institution and served nine years as an editorial writer with the <em>Washington Post</em>. His previous books include <em>Detention and Denial: The Case for Candor after Guantánamo</em> (Brookings, 2010) and <em>Law and the Long War: The Future of Justice in the Age of Terror</em> (Penguin, 2008), and he is cofounder of the Lawfare blog.
		</div>
	</div><h4>
		Downloads
	</h4><ul>
		<li><a href="http://webfeeds.brookings.edu/~/t/0/0/brookingsrss/series/futureoftheconstitution/~www.brookings.edu/~/media/press/books/2011/constitution30/constitution30_toc.pdf">Table of Contents</a></li><li><a href="http://webfeeds.brookings.edu/~/t/0/0/brookingsrss/series/futureoftheconstitution/~www.brookings.edu/~/media/press/books/2011/constitution30/constitution30_chapter.pdf">Sample Chapter</a></li>
	</ul><span>Ordering Information:</span><ul>
		<li>{CD2E3D28-0096-4D03-B2DE-6567EB62AD1E}, 978-0-8157-2212-0, $29.95 <a href="http://webfeeds.brookings.edu/~/t/0/0/brookingsrss/series/futureoftheconstitution/~jhupbooks.press.jhu.edu/ecom/MasterServlet/AddToCartFromExternalHandler?item=9780815722120&amp;domain=brookings.edu">Add to Cart</a></li><li>{9ABF977A-E4A6-41C8-B030-0FD655E07DBF}, 9780815724506, $22.95 <a href="http://webfeeds.brookings.edu/~/t/0/0/brookingsrss/series/futureoftheconstitution/~jhupbooks.press.jhu.edu/ecom/MasterServlet/AddToCartFromExternalHandler?item=9780815724506&amp;domain=brookings.edu">Add to Cart</a></li>
	</ul>
</div>]]>
</content:encoded></item>
<item>
<feedburner:origLink>http://www.brookings.edu/research/papers/2011/07/05-genetics-cohen-george?rssid=Future+of+the+Constitution</feedburner:origLink><guid isPermaLink="false">{13BA8A65-D969-4FA5-AAAF-3E4ACF789E14}</guid><link>http://webfeeds.brookings.edu/~/65487887/0/brookingsrss/series/futureoftheconstitution~The-Problems-and-Possibilities-of-Modern-Genetics-A-Paradigm-for-Social-Ethical-and-Political-Analysis</link><title>The Problems and Possibilities of Modern Genetics: A Paradigm for Social, Ethical, and Political Analysis</title><description><![CDATA[<div>
	<img src="http://www.brookings.edu/~/media/research/images/g/ga%20ge/genetics_scientists001_16x9.jpg?w=120" alt="" border="0" /><br /><p><b>Introduction</b></p><p class="bodytextfirstpar">Imagine a future in which any person, man or woman, could engineer a child as a genetic replica of himself or herself. Or a future in which a child could be the biological fusion of the genes of two men or two women. Or a future in which every individual could know, with reasonable certainty, which diseases they would suffer in the months, years, or even decades ahead. Would this new genetic age constitute a better world, or a deformed one? The triumph of modern civilization, or the realization of modernity&rsquo;s dark side?</p>
<p>With a subject as large and as profound as modern genetics, we face a major question from the start about how to approach it. We could take a scientific approach, examining the use of information technology in genomic research, or the latest advances in identifying certain genetic mutations, or the use of genetic knowledge in the development of medical technologies. We could take a social scientific approach, seeking to understand the economic incentives that drive the genetic research agenda, or surveying public attitudes toward genetic testing, or documenting the use of reproductive genetic technology according to socioeconomic class. We could take a public safety approach, reviewing different genetic tests and therapies for safety and efficacy with a view to identifying regulatory procedures to protect and inform vulnerable patients undergoing gene therapy trials. As we think about the genetic future, all of these approaches are valuable. Yet there are even more fundamental questions that need to be addressed. These concern the human meaning of our growing powers over the human genome.</p>
<p>The reason modern genetics worries, excites, and fascinates the imagination is that we sense that this area of science will affect or even transform the core experiences of being human&mdash;such as how we have children, how we experience freedom, and how we face sickness and death. Like no other area of modern science and technology, genetics inspires both dreams and nightmares about the human future with equal passion: the dream of perfect babies, the nightmare of genetic tyranny. But the dream and the nightmare are not the best guides to understanding how genetics will challenge our moral self-understanding and our social fabric. We need a more sober approach&mdash;one that confronts the real ethical and social dilemmas that we face, without constructing such a monstrous image of the future that our gravest warnings are ignored like the bioethics boy who cried wolf.</p>
<p>What is the role of constitutional adjudication in confronting these dilemmas? In a word, that role should be limited. To be sure, American constitutional principles and institutions provide the frameworks and forums for democratic deliberation regarding bioethical and other important moral questions, but in most cases it will not be possible to resolve them by reference to norms that can fairly be said to be discoverable in the text, logic, structure, or historical understanding of the Constitution. Reasonable people of goodwill who disagree on these matters may be equally committed to constitutional principles of due process, equal protection, and the like; and it would be deeply wrong&mdash;profoundly anti-constitutional&mdash;for people on either side of a disputed question left unsettled by the Constitution to manipulate constitutional concepts or language in the hope of inducing judges, under the guise of interpreting the Constitution, to hand them victories that they have not been able to achieve in the forums of democratic deliberation established by the Constitution itself. It would be a tragedy for our polity if bioethics became the next domain in which over-reaching judges, charged with protecting the rule of law, undermine the constitutional division of powers by usurping the authority vested under the Constitution in the people acting on their own initiative (as is authorized under the laws of some states) or through their elected representatives.</p><h4>
		Downloads
	</h4><ul>
		<li><a href="http://www.brookings.edu/~/media/research/files/papers/2011/7/05-genetics-cohen-george/0705_genetics_cohen_george.pdf">Download the Paper</a></li>
	</ul><div>
		<h4>
			Authors
		</h4><ul>
			<li>Eric Cohen</li><li>Robert P. George</li>
		</ul>
	</div><div>
		Image Source: Adam Gault
	</div>
</div>]]>
</description><pubDate>Tue, 05 Jul 2011 14:10:00 -0400</pubDate><dc:creator>Eric Cohen and Robert P. George</dc:creator>
<itunes:summary> 
Introduction
Imagine a future in which any person, man or woman, could engineer a child as a genetic replica of himself or herself. Or a future in which a child could be the biological fusion of the genes of two men or two women. Or a future in which every individual could know, with reasonable certainty, which diseases they would suffer in the months, years, or even decades ahead. Would this new genetic age constitute a better world, or a deformed one? The triumph of modern civilization, or the realization of modernity's dark side?
With a subject as large and as profound as modern genetics, we face a major question from the start about how to approach it. We could take a scientific approach, examining the use of information technology in genomic research, or the latest advances in identifying certain genetic mutations, or the use of genetic knowledge in the development of medical technologies. We could take a social scientific approach, seeking to understand the economic incentives that drive the genetic research agenda, or surveying public attitudes toward genetic testing, or documenting the use of reproductive genetic technology according to socioeconomic class. We could take a public safety approach, reviewing different genetic tests and therapies for safety and efficacy with a view to identifying regulatory procedures to protect and inform vulnerable patients undergoing gene therapy trials. As we think about the genetic future, all of these approaches are valuable. Yet there are even more fundamental questions that need to be addressed. These concern the human meaning of our growing powers over the human genome.
The reason modern genetics worries, excites, and fascinates the imagination is that we sense that this area of science will affect or even transform the core experiences of being human&#x2014;such as how we have children, how we experience freedom, and how we face sickness and death. Like no other area of modern science and technology, genetics inspires both dreams and nightmares about the human future with equal passion: the dream of perfect babies, the nightmare of genetic tyranny. But the dream and the nightmare are not the best guides to understanding how genetics will challenge our moral self-understanding and our social fabric. We need a more sober approach&#x2014;one that confronts the real ethical and social dilemmas that we face, without constructing such a monstrous image of the future that our gravest warnings are ignored like the bioethics boy who cried wolf.
What is the role of constitutional adjudication in confronting these dilemmas? In a word, that role should be limited. To be sure, American constitutional principles and institutions provide the frameworks and forums for democratic deliberation regarding bioethical and other important moral questions, but in most cases it will not be possible to resolve them by reference to norms that can fairly be said to be discoverable in the text, logic, structure, or historical understanding of the Constitution. Reasonable people of goodwill who disagree on these matters may be equally committed to constitutional principles of due process, equal protection, and the like; and it would be deeply wrong&#x2014;profoundly anti-constitutional&#x2014;for people on either side of a disputed question left unsettled by the Constitution to manipulate constitutional concepts or language in the hope of inducing judges, under the guise of interpreting the Constitution, to hand them victories that they have not been able to achieve in the forums of democratic deliberation established by the Constitution itself. It would be a tragedy for our polity if bioethics became the next domain in which over-reaching judges, charged with protecting the rule of law, undermine the constitutional division of powers by usurping the authority vested under the Constitution in the people acting on their own initiative (as is authorized under the laws of some states) or through ... </itunes:summary>
<itunes:subtitle>Introduction
Imagine a future in which any person, man or woman, could engineer a child as a genetic replica of himself or herself. Or a future in which a child could be the biological fusion of the genes of two men or two women.</itunes:subtitle><content:encoded><![CDATA[<div>
	<img src="http://www.brookings.edu/~/media/research/images/g/ga%20ge/genetics_scientists001_16x9.jpg?w=120" alt="" border="0" />
<br><p><b>Introduction</b></p><p><p class="bodytextfirstpar">Imagine a future in which any person, man or woman, could engineer a child as a genetic replica of himself or herself. Or a future in which a child could be the biological fusion of the genes of two men or two women. Or a future in which every individual could know, with reasonable certainty, which diseases they would suffer in the months, years, or even decades ahead. Would this new genetic age constitute a better world, or a deformed one? The triumph of modern civilization, or the realization of modernity&rsquo;s dark side?</p>
<p>With a subject as large and as profound as modern genetics, we face a major question from the start about how to approach it. We could take a scientific approach, examining the use of information technology in genomic research, or the latest advances in identifying certain genetic mutations, or the use of genetic knowledge in the development of medical technologies. We can take a social scientific approach, seeking to understand the economic incentives that drive the genetic research agenda, or surveying public attitudes toward genetic testing, or documenting the use of reproductive genetic technology according to socioeconomic class. We could take a public safety approach, reviewing different genetic tests and therapies for safety and efficacy with a view to identifying regulatory procedures to protect and inform vulnerable patients undergoing gene therapy trials. As we think about the genetic future, all of these approaches are valuable. Yet there are even more fundamental questions that need to be addressed. These concern the human meaning of our growing powers over the human genome.</p>
<p>The reason modern genetics worries, excites, and fascinates the imagination is that we sense that this area of science will affect or even transform the core experiences of being human&mdash;such as how we have children, how we experience freedom, and how we face sickness and death. Like no other area of modern science and technology, genetics inspires both dreams and nightmares about the human future with equal passion: the dream of perfect babies, the nightmare of genetic tyranny. But the dream and the nightmare are not the best guides to understanding how genetics will challenge our moral self-understanding and our social fabric. We need a more sober approach&mdash;one that confronts the real ethical and social dilemmas that we face, without constructing such a monstrous image of the future that our gravest warnings are ignored like the bioethics boy who cried wolf.</p>
<p>What is the role of constitutional adjudication in confronting these dilemmas? In a word, that role should be limited. To be sure, American constitutional principles and institutions provide the frameworks and forums for democratic deliberation regarding bioethical and other important moral questions, but in most cases it will not be possible to resolve them by reference to norms that can fairly be said to be discoverable in the text, logic, structure, or historical understanding of the Constitution. Reasonable people of goodwill who disagree on these matters may be equally committed to constitutional principles of due process, equal protection, and the like; and it would be deeply wrong&mdash;profoundly anti-constitutional&mdash;for people on either side of a disputed question left unsettled by the Constitution to manipulate constitutional concepts or language in the hope of inducing judges, under the guise of interpreting the Constitution, to hand them victories that they have not been able to achieve in the forums of democratic deliberation established by the Constitution itself. It would be a tragedy for our polity if bioethics became the next domain in which over-reaching judges, charged with protecting the rule of law, undermine the constitutional division of powers by usurping the authority vested under the Constitution in the people acting on their own initiative (as is authorized under the laws of some states) or through their elected representatives.</p></p><h4>
		Downloads
	</h4><ul>
		<li><a href="http://webfeeds.brookings.edu/~/t/0/0/brookingsrss/series/futureoftheconstitution/~www.brookings.edu/~/media/research/files/papers/2011/7/05-genetics-cohen-george/0705_genetics_cohen_george.pdf">Download the Paper</a></li>
	</ul><div>
		<h4>
			Authors
		</h4><ul>
			<li>Eric Cohen</li><li>Robert P. George</li>
		</ul>
	</div><div>
		Image Source: Adam Gault
	</div>
</div>]]>
</content:encoded></item>
<item>
<feedburner:origLink>http://www.brookings.edu/research/papers/2011/05/02-free-speech-rosen?rssid=Future+of+the+Constitution</feedburner:origLink><guid isPermaLink="false">{CDCA0ACD-480C-4166-AADE-F41601F73364}</guid><link>http://webfeeds.brookings.edu/~/65487888/0/brookingsrss/series/futureoftheconstitution~Facebook-Google-and-the-Future-of-Privacy-and-Free-Speech</link><title>Facebook, Google, and the Future of Privacy and Free Speech </title><description><![CDATA[<div>
	<img src="http://www.brookings.edu/~/media/research/images/s/sk%20so/social_connections001_16x9.jpg?w=120" alt="" border="0" /><br /><p><b>Introduction</b></p><p><p>It was 2025 when Facebook decided to post live feeds from public and private surveillance cameras, so they could be searched online. The decision hardly came as a surprise. Ever since Facebook passed the 500 million-member mark in 2010, it had found increasing consumer demand for applications that allowed users to access surveillance cameras with publicly accessible IP addresses. (Initially, live feeds to cameras on Mexican beaches were especially popular.) But in the mid-2020s, popular demand for live surveillance camera feeds was joined by pressure from the U.S. government, which argued that an open circuit television network would be invaluable in tracking potential terrorists. As a result, Facebook decided to link the public and private camera networks, post them live online, and store the video feeds without restrictions on distributed servers in the digital cloud.</p>
    <p>Once the new open circuit system went live, anyone in the world could log onto the Internet, select a particular street view on Facebook maps and zoom in on a particular individual. Anyone could then back click on that individual to retrace her steps since she left the house in the morning or forward click on her to see where she was headed in the future. Using Facebook’s integrated face recognition app, users could click on a stranger walking down any street in the world, plug her image into the Facebook database to identify her by name, and then follow her movements from door-to-door. Since cameras were virtually ubiquitous in public and commercial spaces, the result was the possibility of ubiquitous identification and surveillance of all citizens virtually anywhere in the world—and by anyone. In an enthusiastic launch, Mark Zuckerberg dubbed the new 24/7 ubiquitous surveillance system “Open Planet.”</p>
    <p>Open Planet is not a technological fantasy. Most of the architecture for implementing it already exists, and it would be a simple enough task for Facebook or Google, if the companies chose, to get the system up and running: face recognition is already plausible, storage is increasing exponentially, and the only limitation is the coverage and scope of the existing cameras, which are growing by the day. Indeed, at a Legal Futures Conference at Stanford in 2007, Andrew McLaughlin, then the head of public policy at Google, said he expected Google to get requests to put linked surveillance networks live and online within the decade. How, he asked the audience of scholars and technologists, should Google respond? </p>
    <p>If “Open Planet” went live, would it violate the Constitution? The answer is that it might not under Supreme Court doctrine as it now exists—at least not if it were a purely private affair, run by private companies alone and without government involvement. Both the First Amendment, which protects free speech, and the Fourth Amendment, which prohibits unreasonable searches and seizures, restrict only actions by the government. On the other hand, if the government directed Open Planet’s creation or used it to track citizens on government-owned, as well as private-sector, cameras, perhaps Facebook might be viewed as the equivalent of a state actor, and therefore restricted by the Constitution.</p>
    <p>At the time of the framing of the Constitution, a far less intrusive invasion of privacy – namely, the warrantless search of private homes and desk drawers for seditious papers – was considered the paradigmatic case of an unreasonable and unconstitutional invasion of privacy. The fact that 24/7 ubiquitous surveillance may not violate the Constitution today suggests the challenge of translating the framers’ values into a world in which Google and Facebook now have far more power over the privacy and free speech of most citizens than any King, president, or Supreme Court justice. In this essay, I will examine four different areas where the era of Facebook and Google will challenge our existing ideas about constitutional protections for free speech and privacy: ubiquitous surveillance with GPS devices and online surveillance cameras; airport body scanners; embarrassing Facebook photos and the problem of digital forgetting; and controversial YouTube videos. In each area, I will suggest, preserving constitutional values requires a different balance of legal and technological solutions, combined with political mobilization that leads to changes in social norms. </p>
    <p>Let’s start with Open Planet, and imagine sufficient government involvement to make the courts plausibly consider Facebook’s program the equivalent of state action. Imagine also that the Supreme Court in 2025 were unsettled by Open Planet and inclined to strike it down. A series of other doctrines might bar judicial intervention. The Court has come close to saying that we have no legitimate expectations of privacy in public places, at least when the surveillance technologies in question are in general public use by ordinary members of the public.<a href="#_ftn1" name="_ftnref1">[1]</a>  As mobile camera technology becomes ubiquitous, the Court might hold that the government is entitled to have access to the same linked camera system that ordinary members of the public have become accustomed to browsing. Moreover, the Court has said that we have no expectation of privacy in data that we voluntarily surrender to third parties.<a href="#_ftn2" name="_ftnref2">[2]</a> In cases where digital images are captured on cameras owned by third parties and stored in the digital cloud—that is, on distributed third party servers--we have less privacy than citizens took for granted at the time of the American founding. And although the founders expected a degree of anonymity in public, that expectation would be defeated by the possibility of 24/7 surveillance on Facebook. </p>
    <p>The doctrinal seeds of a judicial response to Open Planet, however, do exist. A Supreme Court inclined to strike down ubiquitous surveillance might draw on recent cases involving decisions by the police to place a GPS tracking device on the car of a suspect without a warrant, tracking his movements 24/7. The Supreme Court has not yet decided whether prolonged surveillance, in the form of “dragnet-type law enforcement practices,” violates the Constitution.<a href="#_ftn3" name="_ftnref3">[3]</a> Three federal circuits have held that the use of a GPS tracking device to monitor someone’s movements in a car over a prolonged period is not a search because we have no expectations of privacy in our public movements.<a href="#_ftn4" name="_ftnref4">[4]</a> But in a visionary opinion in 2010, Judge Douglas Ginsburg of the U.S. Court of Appeals for the D.C. Circuit disagreed. Prolonged surveillance is a search, he recognized, because no reasonable person expects that his movements will be continuously monitored from door to door; all of us have a reasonable expectation of privacy in the “whole” of our movements in public. <a href="#_ftn5" name="_ftnref5">[5]</a> Ginsburg and his colleagues struck down the warrantless GPS surveillance of a suspect that lasted 24 hours a day for nearly a month on the grounds that prolonged, ubiquitous tracking of citizens’ movements in public is constitutionally unreasonable. “Unlike one’s movements during a single journey, the whole of one’s movements over the course of a month is not actually exposed to the public because the likelihood anyone will observe all those movements is effectively nil,” Ginsburg wrote. 
Moreover, “That whole reveals more – sometimes a great deal more – than does the sum of its parts.”<a href="#_ftn6" name="_ftnref6">[6]</a> Invoking the “mosaic theory” that the government has relied on in national security cases, Ginsburg concluded that “Prolonged surveillance reveals types of information not revealed by short-term surveillance, such as what a person does repeatedly, what he does not do, and what he does ensemble.  These types of information can each reveal more about a person than does any individual trip viewed in isolation.”<a href="#_ftn7" name="_ftnref7">[7]</a> Ginsburg understood that 24/7 ubiquitous surveillance differs from more limited tracking not just in degree but in kind – it looks more like virtual stalking than a legitimate investigation – and therefore is an unreasonable search of the person. </p>
    <p>Because prolonged surveillance on “Open Planet” potentially reveals far more about each of us than 24/7 GPS tracking does, providing real time images of all our actions, rather than simply tracking the movements of our cars, it could also be struck down as an unreasonable search of our persons. And if the Supreme Court struck down Open Planet on Fourth Amendment grounds, it might be influenced by the state regulations of GPS surveillance that Ginsburg found persuasive, or by Congressional attempts to regulate Facebook or other forms of 24/7 surveillance, such as the Geolocational Privacy and Surveillance Act proposed by Sen. Ron Wyden (D-OR) that would require officers to get a warrant before electronically tracking cell phones or cars.<a href="#_ftn8" name="_ftnref8">[8]</a></p>
    <p>The Supreme Court in 2025 might also conceivably choose to strike down Open Planet on more expansive grounds, relying not just on the Fourth Amendment, but on the right to autonomy recognized in cases like <i>Planned Parenthood v. Casey</i> and <i>Lawrence v. Texas</i>. The right to privacy cases, beginning with <i>Griswold v. Connecticut</i> and culminating in <i>Roe v. Wade</i> and <i>Lawrence</i>, are often viewed as cases about sexual autonomy, but in <i>Casey</i> and <i>Lawrence</i>, Justice Anthony Kennedy recognized a far more sweeping principle of personal autonomy that might well protect individuals from totalizing forms of ubiquitous surveillance. Imagine an opinion written in 2025 by Justice Kennedy, still ruling the Court and the country at the age of 89. “In our tradition the State is not omnipresent in the home. And there are other spheres of our lives and existence, outside the home, where the State should not be a dominant presence,” Kennedy wrote in <i>Lawrence</i>. “Freedom extends beyond spatial bounds. Liberty presumes an autonomy of self that includes freedom of thought, belief, expression, and certain intimate conduct.”<a href="#_ftn9" name="_ftnref9">[9]</a> Kennedy’s vision of an “autonomy of self” that depends on preventing the state from becoming a “dominant presence” in public as well as private places might well be invoked to prevent the state from participating in a ubiquitous surveillance system that prevents citizens from defining themselves and expressing their individual identities. Just as citizens in the Soviet Union were inhibited from expressing and defining themselves by ubiquitous KGB surveillance, Kennedy might hold, the possibility of ubiquitous surveillance on “Open Planet” also violates the right to autonomy, even if the cameras in question are owned by the private sector, as well as the state, and a private corporation provides the platform for their monitoring.  
Nevertheless, the fact that the system is administered by Facebook, rather than the Government, might be an obstacle to a constitutional ruling along these lines. And if Kennedy (or his successor) struck down “Open Planet” with a sweeping vision of personal autonomy that didn’t coincide with the actual values of a majority of citizens in 2025, the decision could be the <i>Roe</i> of virtual surveillance, provoking backlashes from those who don’t want the Supreme Court imposing its values on a divided nation. </p>
    <p>Would the Supreme Court, in fact, strike down “Open Planet” in 2025? If the past is any guide, the answer may depend on whether the public, in 2025, views 24/7 ubiquitous surveillance as invasive and unreasonable, or whether citizens have become so used to ubiquitous surveillance on and off the web, in virtual space and real space, that the public demands “Open Planet” rather than protesting against it. I don’t mean to suggest that the Court actually reads the polls. But in the age of Google and Facebook, technologies that thoughtfully balance privacy with free expression and other values have tended to be adopted only when companies see their markets as demanding some kind of privacy protection, or when engaged constituencies have mobilized in protest against poorly designed architectures and demanded better ones, helping to create a social consensus that the invasive designs are unreasonable. </p>
    <p>The paradigmatic case of the kind of political mobilization on behalf of constitutional values that I have in mind is presented by my second case: the choice between the naked machine and the blob machine in airport security screening. In 2002, officials at Orlando International airport first began testing the millimeter wave body scanners that are currently at the center of a national uproar. The designers of the scanners at Pacific Northwest Laboratories offered U.S. officials a choice: naked machines or blob machines? The same researchers had developed both technologies, and both were equally effective at identifying contraband. But, as their nicknames suggest, the former displays graphic images of the human body, while the latter scrambles the images into a non-humiliating blob.<a href="#_ftn10" name="_ftnref10">[10]</a></p>
    <p>Since both versions of the scanners promise the same degree of security, any sane attempt to balance privacy and safety would seem to favor the blob machines over the naked machines. And that’s what European governments chose. Most European airport authorities have declined to adopt body scanners at all, because of persuasive evidence that they’re not effective at detecting low-density contraband such as the chemical powder PETN that the trouser bomber concealed in his underwear on Christmas Day, 2009. But the handful of European airports that have adopted body scanners, such as Schiphol airport in Amsterdam, have opted for a version of the blob machine. This is in part due to the efforts of European privacy commissioners, such as Germany’s Peter Schaar, who have emphasized the importance of designing body scanners in ways that protect privacy. </p>
    <p>The U.S. Department of Homeland Security made a very different choice. It deployed the naked body scanners without any opportunity for public comment—then appeared surprised by the backlash. Remarkably, however, the backlash was effective. After a nationwide protest inspired by the Patrick Henry of the anti-Naked Machines movement, a traveler who memorably exclaimed “Don’t Touch my Junk,” President Obama called on the TSA to go back to the drawing board. And a few months after authorizing the intrusive pat downs, in February 2011, the TSA announced that it would begin testing, on a pilot basis, versions of the very same blob machines that the agency had rejected nearly a decade earlier. According to the latest version, to be tested in Las Vegas and Washington, D.C., the TSA will install software filters on its body scanner machines that detect potential threat items and indicate their location on a generic, blob-like outline of each passenger that will appear on a monitor attached to the machine. Passengers without suspicious items will be cleared as “OK”; those with suspicious items will be taken aside for additional screening. The remote rooms in which TSA agents view images of the naked body will be eliminated. According to news reports, TSA began testing the filtering software in the fall of 2010 – precisely when the protests against the naked machines went viral. If the filtering software is implemented across the country, converting naked machines into blob machines, the political victory for privacy will be striking. </p>
    <p>Of course, it’s possible that courts might strike down the naked machines as unreasonable and unconstitutional, even without the political protests. In a 1983 opinion upholding searches by drug-sniffing dogs, Justice Sandra Day O’Connor recognized that a search is most likely to be considered constitutionally reasonable if it is very effective at discovering contraband without revealing innocent but embarrassing information.<a href="#_ftn11" name="_ftnref11">[11]</a> The backscatter machines seem, under O'Connor's view, to be the antithesis of a reasonable search: They reveal a great deal of innocent but embarrassing information and are remarkably ineffective at revealing low-density contraband.</p>
    <p>It’s true that the government gets great deference in airports and at the borders, where routine border searches don’t require heightened suspicion. But the Court has held that non-routine border searches, such as body cavity or strip searches, do require a degree of individual suspicion.  And although the Supreme Court hasn't evaluated airport screening technology, lower courts have emphasized, as the U.S. Court of Appeals for the 9th Circuit ruled in 2007, that "a particular airport security screening search is constitutionally reasonable provided that it 'is no more extensive nor intensive than necessary, in the light of current technology, to detect the presence of weapons or explosives.'"<a href="#_ftn12" name="_ftnref12">[12]</a> </p>
    <p>It’s arguable that because the naked machines are neither effective nor minimally intrusive – that is, because they might be redesigned with blob-machine-like filters that promise just as much security while also protecting privacy – courts might strike them down. As a practical matter, however, both lower courts and the Supreme Court seem far more likely to strike down strip searches that have inspired widespread public opposition – such as the strip search of a high school girl wrongly accused of carrying drugs, which the Supreme Court invalidated by a vote of 8-1<a href="#_ftn13" name="_ftnref13">[13]</a> – than they are of searches that, despite the protests of a mobilized minority, the majority of the public appears to accept. </p>
    <p>The tentative victory of the blob machines over the naked machines, if it materializes, provides a model for successful attempts to balance privacy and security: government can be pressured into striking a reasonable balance between privacy and security by a mobilized minority of the public when the privacy costs of a particular technology are dramatic, visible, widely distributed, and people experience the invasions personally as a kind of loss of control over the conditions of their own exposure. </p>
    <p>But can we be mobilized to demand a similarly reasonable balance when the threats to privacy come not from the government but from private corporations and when those responsible for exposing too much personal information about us are none other than ourselves? When it comes to invasions of privacy by fellow citizens, rather than by the government, we are in the realm not of autonomy but of dignity and decency. (Autonomy preserves a sphere of immunity from government intrusion in our lives; dignity protects the norms of social respect that we accord to each other.) And since dignity is a socially constructed value, it’s unlikely to be preserved by judges--or by private corporations--in the face of the expressed preferences of citizens who are less concerned about dignity than exposure. </p>
    <p>This is the subject of our third case, which involves a challenge that, in big and small ways, is confronting millions of people around the globe: how best to live our lives in a world where the Internet records everything and forgets nothing—where every online photo, status update, <a href="http://topics.nytimes.com/top/news/business/companies/twitter/index.html?inline=nyt-org">Twitter</a> post and blog entry by and about us can be stored forever.<a href="#_ftn14" name="_ftnref14">[14]</a> Consider the case of Stacy Snyder. Four years ago, Snyder, then a 25-year-old teacher in training at Conestoga Valley High School in Lancaster, Pa., posted a photo on her <a href="http://topics.nytimes.com/top/news/business/companies/myspace_com/index.html?inline=nyt-org">MySpace</a> page that showed her at a party wearing a pirate hat and drinking from a plastic cup, with the caption “Drunken Pirate.” After discovering the page, her supervisor at the high school told her the photo was “unprofessional,” and the dean of Millersville University School of Education, where Snyder was enrolled, said she was promoting drinking in virtual view of her under-age students. As a result, days before Snyder’s scheduled graduation, the university denied her a teaching degree. Snyder sued, arguing that the university had violated her First Amendment rights by penalizing her for her (perfectly legal) after-hours behavior. But in 2008, a federal district judge rejected the claim, saying that because Snyder was a public employee whose photo didn’t relate to matters of public concern, her “Drunken Pirate” post was not protected speech.<a href="#_ftn15" name="_ftnref15">[15]</a></p>
    <p>When historians of the future look back on the perils of the early digital age, Stacy Snyder may well be an icon. With Web sites like LOL Facebook Moments, which collects and shares embarrassing personal revelations from Facebook users, ill-advised photos and online chatter are coming back to haunt people months or years after the fact. </p>
    <p>Technological advances, of course, have often presented new threats to privacy. In 1890, in perhaps the most famous article on privacy ever written, Samuel Warren and Louis Brandeis complained that because of new technology — like the <a href="http://topics.nytimes.com/top/news/business/companies/eastman_kodak_company/index.html?inline=nyt-org">Kodak</a> camera and the tabloid press — “gossip is no longer the resource of the idle and of the vicious but has become a trade.”<a href="#_ftn16" name="_ftnref16">[16]</a> But the mild society gossip of the Gilded Age pales before the volume of revelations contained in the photos, video and chatter on social-media sites and elsewhere across the Internet. Facebook, which surpassed MySpace in 2008 as the largest social-networking site, now has more than 500 million members, or 22 percent of all Internet users, who spend more than 500 billion minutes a month on the site. Facebook users share more than 25 billion pieces of content each month (including news stories, blog posts and photos), and the average user creates 70 pieces of content a month. </p>
    <p>Today, as in Brandeis’s day, the value threatened by gossip on the Internet – whether posted by us or by others – is dignity. (Brandeis called it an offense against honor.) But American law has never been good at regulating offenses against dignity – especially when regulations would clash with other values, such as protections for free speech. And indeed, the most ambitious proposals in Europe to create new legal rights to escape your past on the Internet are very hard to reconcile with the American free speech tradition. </p>
    <p>The cautionary tale here is Argentina, which has dramatically expanded the liability of search engines like Google and Yahoo for offensive photographs that harm someone’s reputation. Recently, an Argentinean judge held Google and Yahoo liable for causing “moral harm” and violating the privacy of Virginia Da Cunha, a pop star, by indexing pictures of her that were linked to erotic content. The ruling against Google and Yahoo was overturned on appeal in August, but there are at least 130 similar cases pending in Argentina to force search engines to remove or block offensive content. In the U.S., search engines are protected by the Communications Decency Act, which immunizes Internet service providers from liability for content posted by third parties. But as liability against search engines expands abroad, it will seriously curtail free speech: Yahoo says that the only way to comply with such injunctions is to block all sites that refer to a particular plaintiff.<a href="#_ftn17" name="_ftnref17">[17]</a></p>
    <p>In Europe, recent proposals to create a legally enforceable right to escape your past have come from the French. The French data commissioner, Alex Türk, has proposed a right to oblivion – a right to escape your past on the Internet. The details are fuzzy, but it appears that the proposal would rely on an international body – say, a commission of forgetfulness – to evaluate particular takedown requests and order Google and Facebook to remove content that, in the view of commissioners, violated an individual’s dignitary rights. </p>
    <p>From an American perspective, the very intrusiveness of this proposal is enough to make it implausible: how could we rely on bureaucrats to protect our dignity in cases where we have failed to protect it on our own? Europeans, who have less of a free speech tradition and far more of a tradition of allowing people to remove photographs taken and posted against their will, will be more sympathetic to the proposal. But from the perspective of most American courts and companies, giving people the right selectively to delete their pasts from public discourse would pose unacceptably great threats to free speech. </p>
    <p>A far more promising solution to the problem of forgetting on the Internet is technological. And there are already small-scale privacy apps that offer disappearing data. An app called TigerText allows text-message senders to set a time limit from one minute to 30 days, after which the text disappears from the company’s servers, on which it is stored, and therefore, from the senders’ and recipients’ phones. (The founder of TigerText, Jeffrey Evans, has said he chose the name before the scandal involving <a href="http://topics.nytimes.com/top/reference/timestopics/people/w/tiger_woods/index.html?inline=nyt-per">Tiger Woods</a>’s supposed texts to a mistress.)<a href="#_ftn18" name="_ftnref18">[18]</a></p>
    <p>Expiration dates could be implemented more broadly in various ways. Researchers at the <a href="http://topics.nytimes.com/top/reference/timestopics/organizations/u/university_of_washington/index.html?inline=nyt-org">University of Washington</a>, for example, are developing a technology called Vanish that makes electronic data “self-destruct” after a specified period of time. Instead of relying on Google, Facebook or Hotmail to delete the data that is stored “in the cloud” — in other words, on their distributed servers — Vanish encrypts the data and then “shatters” the encryption key. To read the data, your computer has to put the pieces of the key back together, but they “erode” or “rust” as time passes, and after a certain point the document can no longer be read. The technology doesn’t promise perfect control — you can’t stop someone from copying your photos or Facebook chats during the period in which they are not encrypted. But as Vanish improves, it could bring us much closer to a world where our data don’t linger forever.</p>
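<p>The key-shattering idea behind Vanish can be sketched in a few lines of code. The real system uses Shamir threshold secret sharing and scatters the shares across a public peer-to-peer network, where node churn gradually erodes them; the toy sketch below substitutes a simpler all-or-nothing XOR split, and all of the function names are illustrative rather than Vanish's actual API. The principle is the same: the ciphertext can live forever in the cloud, but once even one key share is lost, the data is unreadable.</p>

```python
import secrets
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def split_key(key: bytes, n: int) -> list[bytes]:
    """Split key into n XOR shares; every share is needed to rebuild it."""
    shares = [secrets.token_bytes(len(key)) for _ in range(n - 1)]
    shares.append(reduce(xor_bytes, shares, key))  # last share makes the XOR of all shares equal the key
    return shares

def recombine(shares: list[bytes]) -> bytes:
    """XOR all shares back together to recover the key."""
    return reduce(xor_bytes, shares)

message = b"drunken pirate photo"
key = secrets.token_bytes(len(message))   # one-time random key
ciphertext = xor_bytes(message, key)      # store this anywhere, forever
shares = split_key(key, n=10)             # scatter these (Vanish uses a DHT)

# While every share survives, the data is readable.
assert xor_bytes(ciphertext, recombine(shares)) == message

# After the network "forgets" even one share, the key is gone for good.
eroded = shares[1:]
assert recombine(eroded) != key
```

<p>Vanish's actual design is more forgiving than this sketch: with threshold sharing, the data survives the loss of some shares and expires only once enough of them have churned out of the network.</p>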
    <p>Facebook, if it wanted to, could implement expiration dates on its own platform, making our data disappear after, say, three days or three months unless a user specified that he wanted it to linger forever. It might be a more welcome option for Facebook to encourage the development of Vanish-style apps that would allow individual users who are concerned about privacy to make their own data disappear without imposing the default on all Facebook users.</p>
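The default-expiration policy imagined above amounts to little more than a per-item time-to-live that individual users can opt out of. A minimal sketch, in which the class, field names, and three-day default are all hypothetical rather than any real Facebook interface:

```python
import time

DEFAULT_TTL = 3 * 24 * 3600  # three days, as in the example above

class PostStore:
    """Toy store where posts expire by default unless the user opts out."""

    def __init__(self):
        self._posts = {}  # post_id -> (content, expires_at or None)

    def add(self, post_id, content, keep_forever=False, ttl=DEFAULT_TTL, now=None):
        now = time.time() if now is None else now
        expires_at = None if keep_forever else now + ttl
        self._posts[post_id] = (content, expires_at)

    def get(self, post_id, now=None):
        now = time.time() if now is None else now
        item = self._posts.get(post_id)
        if item is None:
            return None
        content, expires_at = item
        if expires_at is not None and now >= expires_at:
            del self._posts[post_id]  # lazily delete expired posts on read
            return None
        return content

store = PostStore()
store.add("p1", "spring break photo", now=0)               # expires by default
store.add("p2", "profile bio", keep_forever=True, now=0)   # user opted out
assert store.get("p1", now=1000) == "spring break photo"
assert store.get("p1", now=DEFAULT_TTL + 1) is None
assert store.get("p2", now=10 * DEFAULT_TTL) == "profile bio"
```

The design question the essay raises is exactly which line is the default: expiring unless the user says otherwise, or lingering unless the user installs an app.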
    <p>So far, however, Zuckerberg, Facebook’s C.E.O., has been moving in the opposite direction — toward transparency, rather than privacy. In defending Facebook’s recent decision to make the default for profile information about friends and relationship status public, Zuckerberg told the founder of the publication TechCrunch that Facebook had an obligation to reflect “current social norms” that favored exposure over privacy. “People have really gotten comfortable not only sharing more information and different kinds but more openly and with more people, and that social norm is just something that has evolved over time,” <a href="#_ftn19" name="_ftnref19">[19]</a> he said.</p>
    <p>It’s true that a German company, X-Pire, recently announced the launch of a Facebook app that will allow users automatically to erase designated photos. Using electronic keys that expire after short periods of time, and obtained by solving a Captcha – a graphic that requires users to type in a displayed combination of characters – the application ensures that once the time stamp on the photo has expired, the key disappears.<a href="#_ftn20" name="_ftnref20">[20]</a> X-Pire is a model for a sensible, blob-machine-like solution to the problem of digital forgetting. But unless Facebook builds X-Pire-like apps into its platform – an unlikely outcome given its commercial interests – a majority of Facebook users are unlikely to seek out disappearing-data options until it’s too late. X-Pire, therefore, may remain for the foreseeable future a technological solution to a grave privacy problem—but a solution that doesn’t have an obvious market. </p>
    <p>The courts, in my view, are better equipped to regulate offenses against autonomy, such as 24/7 surveillance on Facebook, than offenses against dignity, such as drunken Facebook pictures that never go away. But that regulation in both cases will likely turn on evolving social norms whose contours in twenty years are hard to predict. </p>
    <p>Finally, let’s consider one last example of the challenge of preserving constitutional values in the age of Facebook and Google, an example that concerns not privacy but free speech.<a href="#_ftn21" name="_ftnref21">[21]</a> </p>
    <p>At the moment, the person who arguably has more power than any other to determine who may speak and who may be heard around the globe isn’t a king, president or Supreme Court justice. She is Nicole Wong, the deputy general counsel of Google, and her colleagues call her “The Decider.” It is Wong who decides what controversial user-generated content goes down or stays up on YouTube and other applications owned by Google, including Blogger, the blog site; Picasa, the photo-sharing site; and Orkut, the social networking site. Wong and her colleagues also oversee Google’s search engine: they decide what controversial material does and doesn’t appear on the local search engines that Google maintains in many countries in the world, as well as on Google.com. As a result, Wong and her colleagues arguably have more influence over the contours of online expression than anyone else on the planet.</p>
    <p>At the moment, Wong seems to be exercising that responsibility with sensitivity to the values of free speech. Google and Yahoo can be held liable outside the United States for indexing or directing users to content after having been notified that it was illegal in a foreign country. In the United States, by contrast, Internet service providers are protected from most lawsuits involving having hosted or linked to illegal user-generated content. As a consequence of these differing standards, Google has considerably less flexibility overseas than it does in the United States about content on its sites, and its “information must be free” ethos is being tested abroad.</p>
    <p>For example, on the German and French default Google search engines, Google.de and Google.fr, you can’t find Holocaust-denial sites that can be found on Google.com, because Holocaust denial is illegal in Germany and France. Broadly, Google has decided to comply with governmental requests to take down links on its national search engines to material that clearly violates national laws. But not every overseas case presents a clear violation of national law. In 2006, for example, protesters at a Google office in India demanded the removal of content on Orkut, the social networking site, that criticized Shiv Sena, a hard-line Hindu political party popular in Mumbai. Wong eventually decided to take down an Orkut group dedicated to attacking Shivaji, revered as a deity by the Shiv Sena Party, because it violated Orkut terms of service by criticizing a religion, but she decided not to take down another group because it merely criticized a political party. “If stuff is clearly illegal, we take that down, but if it’s on the edge, you might push a country a little bit,” Wong told me. “Free-speech law is always built on the edge, and in each country, the question is: Can you define what the edge is?”</p>
    <p>Over the past couple of years, Google and its various applications have been blocked, to different degrees, by 24 countries. Blogger is blocked in Pakistan, for example, and Orkut in Saudi Arabia. Meanwhile, governments are increasingly pressuring telecom companies like <a href="http://topics.nytimes.com/top/news/business/companies/comcast_corporation/index.html?inline=nyt-org">Comcast</a> and <a href="http://topics.nytimes.com/top/news/business/companies/verizon_communications_inc/index.html?inline=nyt-org">Verizon</a> to block controversial speech at the network level. Europe and the U.S. recently agreed to require Internet service providers to identify and block child pornography, and in Europe there are growing demands for network-wide blocking of terrorist-incitement videos. As a result, Wong and her colleagues worry that Google’s ability to make case-by-case decisions about what links and videos are accessible through Google’s sites may be slowly circumvented, as countries are requiring the companies that give us access to the Internet to build top-down censorship into the network pipes.</p>
    <p>It is not only foreign countries that are eager to restrict speech on Google and YouTube. In May 2008, Joseph Lieberman, who has become the A. Mitchell Palmer of the digital age, had his staff contact Google and demand that the company remove from YouTube dozens of what he described as jihadist videos. After viewing the videos one by one, Wong and her colleagues removed some of them but refused to remove those that they decided didn’t violate YouTube guidelines. Lieberman wasn’t satisfied. In an angry follow-up letter to Eric Schmidt, the C.E.O. of Google, Lieberman demanded that all content he characterized as being “produced by Islamist terrorist organizations” be immediately removed from YouTube as a matter of corporate judgment — even videos that didn’t feature hate speech or violent content or violate U.S. law. Wong and her colleagues responded by saying, “YouTube encourages free speech and defends everyone’s right to express unpopular points of view.” Recently, Google and YouTube announced new guidelines prohibiting videos “intended to incite violence.”</p>
    <p>That category scrupulously tracks the Supreme Court’s rigorous First Amendment doctrine, which says that speech can be banned only when it poses an imminent threat of producing serious lawless action. Unfortunately, Wong and her colleagues recently retreated from that bright line under further pressure from Lieberman. In November 2010, YouTube added a new category that viewers can click to flag videos for removal: “promotes terrorism.” Twenty-four hours of video are uploaded to YouTube every minute, and viewers can already use a series of categories to request removal, including “violent or repulsive content” and inappropriate sexual content. Although hailed by Senator Lieberman, the new “promotes terrorism” category is potentially troubling because it goes beyond the narrow test of incitement to violence that YouTube had previously used to flag terrorism-related videos for removal. YouTube’s capitulation to Lieberman shows that a user-generated system for enforcing community standards will never protect speech as scrupulously as unelected judges enforcing strict rules about when speech can be viewed as a form of dangerous conduct. </p>
    <p>Google remains a better guardian for free speech than internet companies like Facebook and Twitter, which have refused to join the Global Network Initiative, an industry-wide coalition committed to upholding free speech and privacy. But the recent capitulation of YouTube shows that Google’s “trust us” model may not be a stable way of protecting free speech in the twenty-first century, even though the alternatives to trusting Google – such as authorizing national regulatory bodies around the globe to request the removal of controversial videos – might protect less speech than Google’s “Decider” model currently does. </p>
    <p>I’d like to conclude by stressing the complexity of protecting constitutional values like privacy and free speech in the age of Google and Facebook, which are not formally constrained by the Constitution. In each of my examples – 24/7 Facebook surveillance, blob machines, escaping your Facebook past, and promoting free speech on YouTube and Google – it’s possible to imagine a rule or technology that would protect free speech and privacy while also preserving security – a blob-machine-like solution. But some of those blob-machine-like solutions are more likely, in practice, to be adopted than others. Engaged minorities may demand blob machines when they personally experience their own privacy being violated; but they may be less likely to rise up against the slow expansion of surveillance cameras, which transform expectations of privacy in public. Judges in the American system may be more likely to resist ubiquitous surveillance in the name of <i>Roe v. Wade</i>-style autonomy than they are to create a legal right allowing people to edit their Internet pasts, which relies on ideas of dignity that in turn require a social consensus that in America, at least, does not exist. As for free speech, it is being anxiously guarded for the moment by Google, but the tremendous pressures from consumers and governments are already making it hard to hold the line at removing only speech that threatens imminent lawless action. </p>In translating constitutional values in light of new technologies, it’s always useful to ask: What would Brandeis do? Brandeis would never have tolerated unpragmatic abstractions, which have the effect of giving citizens less privacy in the age of cloud computing than they had during the founding era. 
In translating the Constitution into the challenges of our time, Brandeis would have considered it a duty actively to engage in the project of constitutional translation in order to preserve the Framers’ values in a startlingly different technological world. But the task of translating constitutional values can’t be left to judges alone: it also falls to regulators, legislators, technologists, and, ultimately, to politically engaged citizens. As Brandeis put it, “If we would guide by the light of reason, we must let our minds be bold.” <div><br clear="all"><hr align="left" width="33%"><div id="ftn1"><p><a href="#_ftnref1" name="_ftn1">[1]</a> See <i>Florida v. Riley</i>, 488 U.S. 445 (1989) (O’Connor, J., concurring). <br><a href="#_ftnref2" name="_ftn2">[2]</a> See <i>United States v. Miller</i>, 425 U.S. 435 (1976).<br><a href="#_ftnref3" name="_ftn3">[3]</a> See <i>United States v. Knotts</i>, 460 U.S. 276, 283-4 (1983). <br><a href="#_ftnref4" name="_ftn4">[4]</a> See <i>United States v. Pineda-Morena</i>, 591 F.3d 1212 (9th Cir. 2010); <i>United States v. Garcia</i>, 474 F.3d 994 (7<sup>th</sup> Cir. 2007); <i>United States v. Marquez</i>, 605 F.3d 604 (8<sup>th</sup> Cir. 2010). <br><a href="#_ftnref5" name="_ftn5">[5]</a> See <i>United States v. Maynard</i>, 615 F.3d 544 (D.C. Cir 2010). <br><a href="#_ftnref6" name="_ftn6">[6]</a> 615 F.3d at 558.  <br><a href="#_ftnref7" name="_ftn7">[7]</a> Id. at 562.<br><a href="#_ftnref8" name="_ftn8">[8]</a> See Declan McCullagh, “Senator Pushes for Mobile Privacy Reform,” <i>CNet News</i>, March 22, 2011, available at <a href="http://m.news.com/2166-12_3-20045723-281.html">http://m.news.com/2166-12_3-20045723-281.html</a> <br><a href="#_ftnref9" name="_ftn9">[9]</a> <i>Lawrence v. Texas</i>, 539 U.S. 558, 562 (2003). <br><a href="#_ftnref10" name="_ftn10">[10]</a> The discussion of the blob machines is adapted from “Nude Breach,” <i>New Republic</i>, December 13, 2010. 
<br><a href="#_ftnref11" name="_ftn11">[11]</a> <i>United States v. Place</i>, 462 U.S. 696 (1983). <br><a href="#_ftnref12" name="_ftn12">[12]</a> <i>United States v. Davis</i>, 482 F.2d 893, 913 (9th Cir. 1973).<br><a href="#_ftnref13" name="_ftn13">[13]</a> <i>Safford Unified School District v. Redding</i>, 557 U.S. ___ (2009). <br><a href="#_ftnref14" name="_ftn14">[14]</a> The discussion of digital forgetting is adapted from “The End of Forgetting,” <i>New York Times Magazine</i>, July 25, 2010. <br><a href="#_ftnref15" name="_ftn15">[15]</a> <i>Snyder v. Millersville University</i>, No. 07-1660 (E.D. Pa. Dec. 3, 2008). <br><a href="#_ftnref16" name="_ftn16">[16]</a> Brandeis and Warren, “The Right to Privacy,” 4 Harv. L. Rev. 193 (1890).<br><a href="#_ftnref17" name="_ftn17">[17]</a> Vinod Sreeharsha, “Google and Yahoo Win Appeal in Argentine Case,” <i>N.Y. Times</i>, August 20, 2010, B4.<br><a href="#_ftnref18" name="_ftn18">[18]</a> See Belinda Luscombe, “Tiger Text: An iPhone App for Cheating Spouses?”, <i>Time.com</i>, Feb. 
26, 2010, available at <a href="http://www.time.com/time/business/article/0,8599,1968233,00.html">http://www.time.com/time/business/article/0,8599,1968233,00.html</a> <br><a href="#_ftnref19" name="_ftn19">[19]</a> Marshall Kirkpatrick, “Facebook’s Zuckerberg Says the Age of Privacy Is Over,” <i>ReadWriteWeb.com</i>, January 9, 2010, available at <a href="http://www.readwriteweb.com/archives/facebooks_zuckerberg_says_the_age_of_privacy_is_ov.php">http://www.readwriteweb.com/archives/facebooks_zuckerberg_says_the_age_of_privacy_is_ov.php</a> <br><a href="#_ftnref20" name="_ftn20">[20]</a> Aemon Malone, “X-Pire Aims to Cut down on Photo D-Tagging on Facebook,” <i>Digital Trends.com</i>, January 17, 2011, available at <a href="http://www.digitaltrends.com/social-media/x-pire-adds-expiration-date-to-digital-photos/">http://www.digitaltrends.com/social-media/x-pire-adds-expiration-date-to-digital-photos/</a> <br><a href="#_ftnref21" name="_ftn21">[21]</a> The discussion of free speech that follows is adapted from “Google’s Gatekeepers,” <i>New York Times Magazine</i>, November 30, 2008.</p></div></div></p><h4>
		Downloads
	</h4><ul>
		<li><a href="http://www.brookings.edu/~/media/research/files/papers/2011/5/02-free-speech-rosen/0502_free_speech_rosen.pdf">Download the Full Paper</a></li>
	</ul><div>
		<h4>
			Authors
		</h4><ul>
			<li><a href="http://www.brookings.edu/experts/rosenj?view=bio">Jeffrey Rosen</a></li>
		</ul>
	</div><div>
		Image Source: David Malan
	</div>
</div>]]>
</description><pubDate>Mon, 02 May 2011 00:00:00 -0400</pubDate><dc:creator>Jeffrey Rosen</dc:creator>
<itunes:summary> 
Introduction
It&#xA0;was 2025 when Facebook decided to post live feeds from public and private surveillance cameras, so they could be searched online. The decision hardly came as a surprise. Ever since Facebook passed the 500 million-member mark in 2010, it had faced increasing consumer demand for applications that allowed users to access surveillance cameras with publicly accessible IP addresses. (Initially, live feeds to cameras on Mexican beaches were especially popular.) But in the mid-2020s, popular demand for live surveillance camera feeds was joined by pressure from the U.S. government, which argued that an open-circuit television network would be invaluable in tracking potential terrorists. As a result, Facebook decided to link the public and private camera networks, post them live online, and store the video feeds without restrictions on distributed servers in the digital cloud. 
Once the new open circuit system went live, anyone in the world could log onto the Internet, select a particular street view on Facebook maps and zoom in on a particular individual. Anyone could then back click on that individual to retrace her steps since she left the house in the morning or forward click on her to see where she was headed in the future. Using Facebook&#x2019;s integrated face recognition app, users could click on a stranger walking down any street in the world, plug her image into the Facebook database to identify her by name, and then follow her movements from door-to-door. Since cameras were virtually ubiquitous in public and commercial spaces, the result was the possibility of ubiquitous identification and surveillance of all citizens virtually anywhere in the world&#x2014;and by anyone. In an enthusiastic launch, Mark Zuckerberg dubbed the new 24/7 ubiquitous surveillance system &#8220;Open Planet.&#8221; 
Open Planet is not a technological fantasy. Most of the architecture for implementing it already exists, and it would be a simple enough task for Facebook or Google, if the companies chose, to get the system up and running: face recognition is already plausible; storage is increasing exponentially; and the only limitation is the coverage and scope of the existing cameras, which are growing by the day. Indeed, at a Legal Futures Conference at Stanford in 2007, Andrew McLaughlin, then the head of public policy at Google, said he expected Google to get requests to put linked surveillance networks live and online within the decade. How, he asked the audience of scholars and technologists, should Google respond? 
If &#8220;Open Planet&#8221; went live, would it violate the Constitution? The answer is that it might not under Supreme Court doctrine as it now exists&#x2014;at least not if it were a purely private affair, run by private companies alone and without government involvement. Both the First Amendment, which protects free speech, and the Fourth Amendment, which prohibits unreasonable searches and seizures, restrict only actions by the government. On the other hand, if the government directed Open Planet&#x2019;s creation or used it to track citizens on government-owned, as well as private-sector, cameras, perhaps Facebook might be viewed as the equivalent of a state actor, and therefore restricted by the Constitution. 
At the time of the framing of the Constitution, a far less intrusive invasion of privacy &#x2013; namely, the warrantless search of private homes and desk drawers for seditious papers &#x2013; was considered the paradigmatic case of an unreasonable and unconstitutional invasion of privacy. The fact that 24/7 ubiquitous surveillance may not violate the Constitution today suggests the challenge of translating the framers&#x2019; values into a world in which Google and Facebook now have far more power over the privacy and free speech of most citizens than any King, president, or Supreme Court justice. In this essay, I will examine four different areas where the era of Facebook and Google will challenge our ... </itunes:summary>
<itunes:subtitle>Introduction
It&#xA0;was 2025 when Facebook decided to post live feeds from public and private surveillance cameras, so they could be searched online. The decision hardly came as a surprise. Ever since Facebook passed the 500 million-member mark in ... </itunes:subtitle><content:encoded><![CDATA[<div>
	<img src="http://www.brookings.edu/~/media/research/images/s/sk%20so/social_connections001_16x9.jpg?w=120" alt="" border="0" />
<br><p><b>Introduction</b></p><p><p>It was 2025 when Facebook decided to post live feeds from public and private surveillance cameras, so they could be searched online. The decision hardly came as a surprise. Ever since Facebook passed the 500 million-member mark in 2010, it had faced increasing consumer demand for applications that allowed users to access surveillance cameras with publicly accessible IP addresses. (Initially, live feeds to cameras on Mexican beaches were especially popular.) But in the mid-2020s, popular demand for live surveillance camera feeds was joined by pressure from the U.S. government, which argued that an open-circuit television network would be invaluable in tracking potential terrorists. As a result, Facebook decided to link the public and private camera networks, post them live online, and store the video feeds without restrictions on distributed servers in the digital cloud.</p>
    <p>Once the new open circuit system went live, anyone in the world could log onto the Internet, select a particular street view on Facebook maps and zoom in on a particular individual. Anyone could then back click on that individual to retrace her steps since she left the house in the morning or forward click on her to see where she was headed in the future. Using Facebook’s integrated face recognition app, users could click on a stranger walking down any street in the world, plug her image into the Facebook database to identify her by name, and then follow her movements from door-to-door. Since cameras were virtually ubiquitous in public and commercial spaces, the result was the possibility of ubiquitous identification and surveillance of all citizens virtually anywhere in the world—and by anyone. In an enthusiastic launch, Mark Zuckerberg dubbed the new 24/7 ubiquitous surveillance system “Open Planet.”</p>
    <p>Open Planet is not a technological fantasy. Most of the architecture for implementing it already exists, and it would be a simple enough task for Facebook or Google, if the companies chose, to get the system up and running: face recognition is already plausible; storage is increasing exponentially; and the only limitation is the coverage and scope of the existing cameras, which are growing by the day. Indeed, at a Legal Futures Conference at Stanford in 2007, Andrew McLaughlin, then the head of public policy at Google, said he expected Google to get requests to put linked surveillance networks live and online within the decade. How, he asked the audience of scholars and technologists, should Google respond? </p>
    <p>If “Open Planet” went live, would it violate the Constitution? The answer is that it might not under Supreme Court doctrine as it now exists—at least not if it were a purely private affair, run by private companies alone and without government involvement. Both the First Amendment, which protects free speech, and the Fourth Amendment, which prohibits unreasonable searches and seizures, restrict only actions by the government. On the other hand, if the government directed Open Planet’s creation or used it to track citizens on government-owned, as well as private-sector, cameras, perhaps Facebook might be viewed as the equivalent of a state actor, and therefore restricted by the Constitution.</p>
    <p>At the time of the framing of the Constitution, a far less intrusive invasion of privacy – namely, the warrantless search of private homes and desk drawers for seditious papers – was considered the paradigmatic case of an unreasonable and unconstitutional invasion of privacy. The fact that 24/7 ubiquitous surveillance may not violate the Constitution today suggests the challenge of translating the framers’ values into a world in which Google and Facebook now have far more power over the privacy and free speech of most citizens than any King, president, or Supreme Court justice. In this essay, I will examine four different areas where the era of Facebook and Google will challenge our existing ideas about constitutional protections for free speech and privacy: ubiquitous surveillance with GPS devices and online surveillance cameras; airport body scanners; embarrassing Facebook photos and the problem of digital forgetting; and controversial YouTube videos. In each area, I will suggest, preserving constitutional values requires a different balance of legal and technological solutions, combined with political mobilization that leads to changes in social norms. </p>
    <p>Let’s start with Open Planet, and imagine sufficient government involvement to make the courts plausibly consider Facebook’s program the equivalent of state action. Imagine also that the Supreme Court in 2025 were unsettled by Open Planet and inclined to strike it down. A series of other doctrines might bar judicial intervention. The Court has come close to saying that we have no legitimate expectations of privacy in public places, at least when the surveillance technologies in question are in general public use by ordinary members of the public.<a href="#_ftn1" name="_ftnref1">[1]</a>  As mobile camera technology becomes ubiquitous, the Court might hold that the government is entitled to have access to the same linked camera system that ordinary members of the public have become accustomed to browsing. Moreover, the Court has said that we have no expectation of privacy in data that we voluntarily surrender to third parties.<a href="#_ftn2" name="_ftnref2">[2]</a> In cases where digital images are captured on cameras owned by third parties and stored in the digital cloud—that is, on distributed third party servers--we have less privacy than citizens took for granted at the time of the American founding. And although the founders expected a degree of anonymity in public, that expectation would be defeated by the possibility of 24/7 surveillance on Facebook. </p>
    <p>The doctrinal seeds of a judicial response to Open Planet, however, do exist. A Supreme Court inclined to strike down ubiquitous surveillance might draw on recent cases involving decisions by the police to place a GPS tracking device on the car of a suspect without a warrant, tracking his movements 24/7. The Supreme Court has not yet decided whether prolonged surveillance, in the form of “dragnet-type law enforcement practices,” violates the Constitution.<a href="#_ftn3" name="_ftnref3">[3]</a> Three federal circuits have held that the use of a GPS tracking device to monitor someone’s movements in a car over a prolonged period is not a search because we have no expectations of privacy in our public movements.<a href="#_ftn4" name="_ftnref4">[4]</a> But in a visionary opinion in 2010, Judge Douglas Ginsburg of the U.S. Court of Appeals for the D.C. Circuit disagreed. Prolonged surveillance is a search, he recognized, because no reasonable person expects that his movements will be continuously monitored from door to door; all of us have a reasonable expectation of privacy in the “whole” of our movements in public.<a href="#_ftn5" name="_ftnref5">[5]</a> Ginsburg and his colleagues struck down the warrantless GPS surveillance of a suspect that lasted 24 hours a day for nearly a month on the grounds that prolonged, ubiquitous tracking of citizens’ movements in public is constitutionally unreasonable. “Unlike one’s movements during a single journey, the whole of one’s movements over the course of a month is not actually exposed to the public because the likelihood anyone will observe all those movements is effectively nil,” Ginsburg wrote. 
Moreover, “That whole reveals more – sometimes a great deal more – than does the sum of its parts.”<a href="#_ftn6" name="_ftnref6">[6]</a> Echoing the “mosaic theory” invoked by the government in national security cases, Ginsburg concluded that “Prolonged surveillance reveals types of information not revealed by short-term surveillance, such as what a person does repeatedly, what he does not do, and what he does ensemble. These types of information can each reveal more about a person than does any individual trip viewed in isolation.”<a href="#_ftn7" name="_ftnref7">[7]</a> Ginsburg understood that 24/7 ubiquitous surveillance differs from more limited tracking not just in degree but in kind – it looks more like virtual stalking than a legitimate investigation – and therefore is an unreasonable search of the person. </p>
    <p>Because prolonged surveillance on “Open Planet” potentially reveals far more about each of us than 24/7 GPS tracking does, providing real time images of all our actions, rather than simply tracking the movements of our cars, it could also be struck down as an unreasonable search of our persons. And if the Supreme Court struck down Open Planet on Fourth Amendment grounds, it might be influenced by the state regulations of GPS surveillance that Ginsburg found persuasive, or by Congressional attempts to regulate Facebook or other forms of 24/7 surveillance, such as the Geolocational Privacy and Surveillance Act proposed by Sen. Ron Wyden (D-OR) that would require officers to get a warrant before electronically tracking cell phones or cars.<a href="#_ftn8" name="_ftnref8">[8]</a></p>
    <p>The Supreme Court in 2025 might also conceivably choose to strike down Open Planet on more expansive grounds, relying not just on the Fourth Amendment, but on the right to autonomy recognized in cases like <i>Planned Parenthood v. Casey</i> and <i>Lawrence v. Texas</i>. The right to privacy cases, beginning with <i>Griswold v. Connecticut</i> and culminating in <i>Roe v. Wade</i> and <i>Lawrence</i>, are often viewed as cases about sexual autonomy, but in <i>Casey</i> and <i>Lawrence</i>, Justice Anthony Kennedy recognized a far more sweeping principle of personal autonomy that might well protect individuals from totalizing forms of ubiquitous surveillance. Imagine an opinion written in 2025 by Justice Kennedy, still ruling the Court and the country at the age of 89. “In our tradition the State is not omnipresent in the home. And there are other spheres of our lives and existence, outside the home, where the State should not be a dominant presence,” Kennedy wrote in <i>Lawrence</i>. “Freedom extends beyond spatial bounds. Liberty presumes an autonomy of self that includes freedom of thought, belief, expression, and certain intimate conduct.”<a href="#_ftn9" name="_ftnref9">[9]</a> Kennedy’s vision of an “autonomy of self” that depends on preventing the state from becoming a “dominant presence” in public as well as private places might well be invoked to prevent the state from participating in a ubiquitous surveillance system that prevents citizens from defining themselves and expressing their individual identities. Just as citizens in the Soviet Union were inhibited from expressing and defining themselves by ubiquitous KGB surveillance, Kennedy might hold, the possibility of ubiquitous surveillance on “Open Planet” also violates the right to autonomy, even if the cameras in question are owned by the private sector, as well as the state, and a private corporation provides the platform for their monitoring.  
Nevertheless, the fact that the system is administered by Facebook, rather than the Government, might be an obstacle to a constitutional ruling along these lines. And if Kennedy (or his successor) struck down “Open Planet” with a sweeping vision of personal autonomy that didn’t coincide with the actual values of a majority of citizens in 2025, the decision could be the <i>Roe</i> of virtual surveillance, provoking backlashes from those who don’t want the Supreme Court imposing its values on a divided nation. </p>
    <p>Would the Supreme Court, in fact, strike down “Open Planet” in 2025? If the past is any guide, the answer may depend on whether the public, in 2025, views 24/7 ubiquitous surveillance as invasive and unreasonable, or whether citizens have become so used to ubiquitous surveillance on and off the web, in virtual space and real space, that the public demands “Open Planet” rather than protesting against it. I don’t mean to suggest that the Court actually reads the polls. But in the age of Google and Facebook, technologies that thoughtfully balance privacy with free expression and other values have tended to be adopted only when companies see their markets as demanding some kind of privacy protection, or when engaged constituencies have mobilized in protest against poorly designed architectures and demanded better ones, helping to create a social consensus that the invasive designs are unreasonable. </p>
    <p>The paradigmatic case of the kind of political mobilization on behalf of constitutional values that I have in mind is presented by my second case: the choice between the naked machine and the blob machine in airport security screening. In 2002, officials at Orlando International airport first began testing the millimeter wave body scanners that are currently at the center of a national uproar. The designers of the scanners at Pacific Northwest Laboratories offered U.S. officials a choice: naked machines or blob machines? The same researchers had developed both technologies, and both were equally effective at identifying contraband. But, as their nicknames suggest, the former displays graphic images of the human body, while the latter scrambles the images into a non-humiliating blob.<a href="#_ftn10" name="_ftnref10">[10]</a></p>
    <p>Since both versions of the scanners promise the same degree of security, any sane attempt to balance privacy and safety would seem to favor the blob machines over the naked machines. And that’s what European governments chose. Most European airport authorities have declined to adopt body scanners at all, because of persuasive evidence that they’re not effective at detecting low-density contraband such as the chemical powder PETN that the trouser bomber concealed in his underwear on Christmas Day 2009. But the handful of European airports that have adopted body scanners, such as Schiphol airport in Amsterdam, have opted for a version of the blob machine. This is in part due to the efforts of European privacy commissioners, such as Germany’s Peter Schaar, who have emphasized the importance of designing body scanners in ways that protect privacy. </p>
    <p>The U.S. Department of Homeland Security made a very different choice. It deployed the naked body scanners without any opportunity for public comment—then appeared surprised by the backlash. Remarkably, however, the backlash was effective. After a nationwide protest inspired by the Patrick Henry of the anti-Naked Machines movement, a traveler who memorably exclaimed “Don’t Touch my Junk,” President Obama called on the TSA to go back to the drawing board. And a few months after authorizing the intrusive pat-downs, in February 2011, the TSA announced that it would begin testing, on a pilot basis, versions of the very same blob machines that the agency had rejected nearly a decade earlier. Under the latest version, to be tested in Las Vegas and Washington, D.C., the TSA will install software filters on its body scanner machines that detect potential threat items and indicate their location on a generic, blob-like outline of each passenger that will appear on a monitor attached to the machine. Passengers without suspicious items will be cleared as “OK”; those with suspicious items will be taken aside for additional screening. The remote rooms in which TSA agents view images of the naked body will be eliminated. According to news reports, TSA began testing the filtering software in the fall of 2010 – precisely when the protests against the naked machines went viral. If the filtering software is implemented across the country, converting naked machines into blob machines, the political victory for privacy will be striking. </p>
    <p>Of course, it’s possible that courts might strike down the naked machines as unreasonable and unconstitutional, even without the political protests. In a 1983 opinion upholding searches by drug-sniffing dogs, Justice Sandra Day O’Connor recognized that a search is most likely to be considered constitutionally reasonable if it is very effective at discovering contraband without revealing innocent but embarrassing information.<a href="#_ftn11" name="_ftnref11">[11]</a> The backscatter machines seem, under O'Connor's view, to be the antithesis of a reasonable search: They reveal a great deal of innocent but embarrassing information and are remarkably ineffective at revealing low-density contraband.</p>
    <p>It’s true that the government gets great deference in airports and at the borders, where routine border searches don’t require heightened suspicion. But the Court has held that non-routine border searches, such as body cavity or strip searches, do require a degree of individual suspicion.  And although the Supreme Court hasn't evaluated airport screening technology, lower courts have emphasized, as the U.S. Court of Appeals for the 9th Circuit ruled in 2007, that "a particular airport security screening search is constitutionally reasonable provided that it 'is no more extensive nor intensive than necessary, in the light of current technology, to detect the presence of weapons or explosives.'"<a href="#_ftn12" name="_ftnref12">[12]</a> </p>
    <p>It’s arguable that since the naked machines are neither effective nor minimally intrusive – that is, because they could be designed with blob-machine-like filters that promise just as much security while also protecting privacy – courts might strike them down. As a practical matter, however, both lower courts and the Supreme Court seem far more likely to strike down strip searches that have inspired widespread public opposition – such as the strip search of a high school girl wrongly accused of carrying drugs, which the Supreme Court invalidated by a vote of 8-1<a href="#_ftn13" name="_ftnref13">[13]</a> – than searches that, despite the protests of a mobilized minority, the majority of the public appears to accept. </p>
    <p>The tentative victory of the blob machines over the naked machines, if it materializes, provides a model for successful attempts to balance privacy and security: government can be pressured into striking a reasonable balance between privacy and security by a mobilized minority of the public when the privacy costs of a particular technology are dramatic, visible, widely distributed, and people experience the invasions personally as a kind of loss of control over the conditions of their own exposure. </p>
    <p>But can we be mobilized to demand a similarly reasonable balance when the threats to privacy come not from the government but from private corporations and when those responsible for exposing too much personal information about us are none other than ourselves? When it comes to invasions of privacy by fellow citizens, rather than by the government, we are in the realm not of autonomy but of dignity and decency. (Autonomy preserves a sphere of immunity from government intrusion in our lives; dignity protects the norms of social respect that we accord to each other.) And since dignity is a socially constructed value, it’s unlikely to be preserved by judges--or by private corporations--in the face of the expressed preferences of citizens who are less concerned about dignity than exposure. </p>
    <p>This is the subject of our third case, which involves a challenge that, in big and small ways, is confronting millions of people around the globe: how best to live our lives in a world where the Internet records everything and forgets nothing—where every online photo, status update, <a href="http://webfeeds.brookings.edu/~/t/0/0/brookingsrss/series/futureoftheconstitution/~topics.nytimes.com/top/news/business/companies/twitter/index.html?inline=nyt-org">Twitter</a> post and blog entry by and about us can be stored forever.<a href="#_ftn14" name="_ftnref14">[14]</a> Consider the case of Stacy Snyder. Four years ago, Snyder, then a 25-year-old teacher in training at Conestoga Valley High School in Lancaster, Pa., posted a photo on her <a href="http://webfeeds.brookings.edu/~/t/0/0/brookingsrss/series/futureoftheconstitution/~topics.nytimes.com/top/news/business/companies/myspace_com/index.html?inline=nyt-org">MySpace</a> page that showed her at a party wearing a pirate hat and drinking from a plastic cup, with the caption “Drunken Pirate.” After discovering the page, her supervisor at the high school told her the photo was “unprofessional,” and the dean of Millersville University School of Education, where Snyder was enrolled, said she was promoting drinking in virtual view of her under-age students. As a result, days before Snyder’s scheduled graduation, the university denied her a teaching degree. Snyder sued, arguing that the university had violated her First Amendment rights by penalizing her for her (perfectly legal) after-hours behavior. But in 2008, a federal district judge rejected the claim, saying that because Snyder was a public employee whose photo didn’t relate to matters of public concern, her “Drunken Pirate” post was not protected speech.<a href="#_ftn15" name="_ftnref15">[15]</a></p>
    <p>When historians of the future look back on the perils of the early digital age, Stacy Snyder may well be an icon. With Web sites like LOL Facebook Moments, which collects and shares embarrassing personal revelations from Facebook users, ill-advised photos and online chatter are coming back to haunt people months or years after the fact. </p>
    <p>Technological advances, of course, have often presented new threats to privacy. In 1890, in perhaps the most famous article on privacy ever written, Samuel Warren and Louis Brandeis complained that because of new technology — like the <a href="http://webfeeds.brookings.edu/~/t/0/0/brookingsrss/series/futureoftheconstitution/~topics.nytimes.com/top/news/business/companies/eastman_kodak_company/index.html?inline=nyt-org">Kodak</a> camera and the tabloid press — “gossip is no longer the resource of the idle and of the vicious but has become a trade.”<a href="#_ftn16" name="_ftnref16">[16]</a> But the mild society gossip of the Gilded Age pales before the volume of revelations contained in the photos, video and chatter on social-media sites and elsewhere across the Internet. Facebook, which surpassed MySpace in 2008 as the largest social-networking site, now has more than 500 million members, or 22 percent of all Internet users, who spend more than 500 billion minutes a month on the site. Facebook users share more than 25 billion pieces of content each month (including news stories, blog posts and photos), and the average user creates 70 pieces of content a month. </p>
    <p>Today, as in Brandeis’s day, the value threatened by gossip on the Internet – whether posted by us or by others – is dignity. (Brandeis called it an offense against honor.) But American law has never been good at regulating offenses against dignity – especially when regulations would clash with other values, such as protections for free speech. And indeed, the most ambitious proposals in Europe to create new legal rights to escape your past on the Internet are very hard to reconcile with the American free speech tradition. </p>
    <p>The cautionary tale here is Argentina, which has dramatically expanded the liability of search engines like Google and Yahoo for offensive photographs that harm someone’s reputation. Recently, an Argentine judge held Google and Yahoo liable for causing “moral harm” and violating the privacy of Virginia Da Cunha, a pop star, by indexing pictures of her that were linked to erotic content. The ruling against Google and Yahoo was overturned on appeal in August, but there are at least 130 similar cases pending in Argentina to force search engines to remove or block offensive content. In the U.S., search engines are protected by the Communications Decency Act, which immunizes Internet service providers from hosting content posted by third parties. But as liability against search engines expands abroad, it will seriously curtail free speech: Yahoo says that the only way to comply with such injunctions is to block all sites that refer to a particular plaintiff.<a href="#_ftn17" name="_ftnref17">[17]</a></p>
    <p>In Europe, recent proposals to create a legally enforceable right to escape your past have come from the French. The French data commissioner, Alex Turc, has proposed a right to oblivion – namely, a right to escape your past on the Internet. The details are fuzzy, but it appears that the proposal would rely on an international body – say, a commission of forgetfulness – to evaluate particular takedown requests and order Google and Facebook to remove content that, in the view of the commissioners, violated an individual’s dignitary rights. </p>
    <p>From an American perspective, the very intrusiveness of this proposal is enough to make it implausible: how could we rely on bureaucrats to protect our dignity in cases where we have failed to protect it on our own? Europeans, who have less of a free speech tradition and far more of a tradition of allowing people to remove photographs taken and posted against their will, will be more sympathetic to the proposal. But from the perspective of most American courts and companies, giving people the right selectively to delete their pasts from public discourse would pose unacceptably great threats to free speech. </p>
    <p>A far more promising solution to the problem of forgetting on the Internet is technological. And there are already small-scale privacy apps that offer disappearing data. An app called TigerText allows text-message senders to set a time limit from one minute to 30 days, after which the text disappears from the company’s servers, on which it is stored, and therefore, from the senders’ and recipients’ phones. (The founder of TigerText, Jeffrey Evans, has said he chose the name before the scandal involving <a href="http://webfeeds.brookings.edu/~/t/0/0/brookingsrss/series/futureoftheconstitution/~topics.nytimes.com/top/reference/timestopics/people/w/tiger_woods/index.html?inline=nyt-per">Tiger Woods</a>’s supposed texts to a mistress.)<a href="#_ftn18" name="_ftnref18">[18]</a></p>
    <p>Expiration dates could be implemented more broadly in various ways. Researchers at the <a href="http://webfeeds.brookings.edu/~/t/0/0/brookingsrss/series/futureoftheconstitution/~topics.nytimes.com/top/reference/timestopics/organizations/u/university_of_washington/index.html?inline=nyt-org">University of Washington</a>, for example, are developing a technology called Vanish that makes electronic data “self-destruct” after a specified period of time. Instead of relying on Google, Facebook or Hotmail to delete the data that is stored “in the cloud” — in other words, on their distributed servers — Vanish encrypts the data and then “shatters” the encryption key. To read the data, your computer has to put the pieces of the key back together, but they “erode” or “rust” as time passes, and after a certain point the document can no longer be read. The technology doesn’t promise perfect control — you can’t stop someone from copying your photos or Facebook chats during the period in which they are not encrypted. But as Vanish improves, it could bring us much closer to a world where our data don’t linger forever.</p>
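<p>The mechanism Vanish relies on can be illustrated with a minimal, hypothetical sketch – this is not Vanish’s actual code (the real system uses threshold secret sharing over a public distributed hash table), and the names <i>ExpiringStore</i>, <i>seal</i>, and <i>unseal</i> are invented for illustration. The idea: encrypt the data with a one-time key, split the key into shares that each carry an expiration deadline, and once any share has expired, the key can no longer be rebuilt and the data becomes unreadable.</p>

```python
import secrets
import time

def xor_bytes(a, b):
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def split_key(key, n):
    """Split a key into n XOR shares; all n are needed to rebuild it."""
    shares = [secrets.token_bytes(len(key)) for _ in range(n - 1)]
    last = key
    for s in shares:
        last = xor_bytes(last, s)
    return shares + [last]

class ExpiringStore:
    """Toy stand-in for the distributed hash table that holds key shares.
    Each share carries a deadline; reads after the deadline return nothing,
    mimicking how DHT nodes age out entries over time."""
    def __init__(self):
        self._shares = {}

    def put(self, name, share, ttl_seconds):
        self._shares[name] = (share, time.time() + ttl_seconds)

    def get(self, name):
        share, deadline = self._shares.get(name, (None, 0))
        return share if share is not None and time.time() < deadline else None

def seal(plaintext, store, ttl_seconds, n_shares=3):
    """Encrypt with a one-time pad and scatter the key as expiring shares."""
    key = secrets.token_bytes(len(plaintext))
    ciphertext = xor_bytes(plaintext, key)
    names = []
    for i, share in enumerate(split_key(key, n_shares)):
        name = f"share-{i}"
        store.put(name, share, ttl_seconds)
        names.append(name)
    return ciphertext, names

def unseal(ciphertext, names, store):
    """Rebuild the key from its shares; fails once any share has expired."""
    key = bytes(len(ciphertext))
    for name in names:
        share = store.get(name)
        if share is None:
            return None  # the key has "eroded"; the data is unreadable
        key = xor_bytes(key, share)
    return xor_bytes(ciphertext, key)
```

<p>Before the time limit passes, <i>unseal</i> recovers the original text; afterward the shares are gone, reconstruction fails, and the document has effectively “rusted” away – even though the ciphertext itself may still be sitting on a server somewhere.</p>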
    <p>Facebook, if it wanted to, could implement expiration dates on its own platform, making our data disappear after, say, three days or three months unless a user specified that he wanted it to linger forever. It might be a more welcome option for Facebook to encourage the development of Vanish-style apps that would allow individual users who are concerned about privacy to make their own data disappear without imposing the default on all Facebook users.</p>
    <p>So far, however, Mark Zuckerberg, Facebook’s C.E.O., has been moving in the opposite direction — toward transparency, rather than privacy. In defending Facebook’s recent decision to make profile information about friends and relationship status public by default, Zuckerberg told the founder of the publication TechCrunch that Facebook had an obligation to reflect “current social norms” that favored exposure over privacy. “People have really gotten comfortable not only sharing more information and different kinds but more openly and with more people, and that social norm is just something that has evolved over time,” <a href="#_ftn19" name="_ftnref19">[19]</a> he said.</p>
    <p>It’s true that a German company, X-Pire, recently announced the launch of a Facebook app that will allow users automatically to erase designated photos. Using electronic keys that expire after short periods of time, obtained by solving a Captcha – a graphic that requires users to type in a displayed combination of characters – the application ensures that once the time stamp on a photo has expired, the key disappears.<a href="#_ftn20" name="_ftnref20">[20]</a> X-Pire is a model for a sensible, blob-machine-like solution to the problem of digital forgetting. But unless Facebook builds X-Pire-like apps into its platform – an unlikely outcome given its commercial interests – a majority of Facebook users are unlikely to seek out disappearing data options until it’s too late. X-Pire, therefore, may remain for the foreseeable future a technological solution to a grave privacy problem—but a solution that doesn’t have an obvious market. </p>
    <p>The courts, in my view, are better equipped to regulate offenses against autonomy, such as 24/7 surveillance on Facebook, than offenses against dignity, such as drunken Facebook pictures that never go away. But that regulation in both cases will likely turn on evolving social norms whose contours in twenty years are hard to predict. </p>
    <p>Finally, let’s consider one last example of the challenge of preserving constitutional values in the age of Facebook and Google, an example that concerns not privacy but free speech.<a href="#_ftn21" name="_ftnref21">[21]</a> </p>
    <p>At the moment, the person who arguably has more power than any other to determine who may speak and who may be heard around the globe isn’t a king, president or Supreme Court justice. She is Nicole Wong, the deputy general counsel of Google, and her colleagues call her “The Decider.” It is Wong who decides what controversial user-generated content goes down or stays up on YouTube and other applications owned by Google, including Blogger, the blog site; Picasa, the photo-sharing site; and Orkut, the social networking site. Wong and her colleagues also oversee Google’s search engine: they decide what controversial material does and doesn’t appear on the local search engines that Google maintains in many countries in the world, as well as on Google.com. As a result, Wong and her colleagues arguably have more influence over the contours of online expression than anyone else on the planet.</p>
    <p>At the moment, Wong seems to be exercising that responsibility with sensitivity to the values of free speech. Google and Yahoo can be held liable outside the United States for indexing or directing users to content after having been notified that it was illegal in a foreign country. In the United States, by contrast, Internet service providers are protected from most lawsuits over hosting or linking to illegal user-generated content. As a consequence of these differing standards, Google has considerably less flexibility overseas than it does in the United States about content on its sites, and its “information must be free” ethos is being tested abroad.</p>
    <p>For example, on the German and French default Google search engines, Google.de and Google.fr, you can’t find Holocaust-denial sites that can be found on Google.com, because Holocaust denial is illegal in Germany and France. Broadly, Google has decided to comply with governmental requests to take down links on its national search engines to material that clearly violates national laws. But not every overseas case presents a clear violation of national law. In 2006, for example, protesters at a Google office in India demanded the removal of content on Orkut, the social networking site, that criticized Shiv Sena, a hard-line Hindu political party popular in Mumbai. Wong eventually decided to take down an Orkut group dedicated to attacking Shivaji, revered as a deity by the Shiv Sena Party, because it violated Orkut terms of service by criticizing a religion, but she decided not to take down another group because it merely criticized a political party. “If stuff is clearly illegal, we take that down, but if it’s on the edge, you might push a country a little bit,” Wong told me. “Free-speech law is always built on the edge, and in each country, the question is: Can you define what the edge is?”</p>
    <p>Over the past couple of years, Google and its various applications have been blocked, to different degrees, by 24 countries. Blogger is blocked in Pakistan, for example, and Orkut in Saudi Arabia. Meanwhile, governments are increasingly pressuring telecom companies like <a href="http://webfeeds.brookings.edu/~/t/0/0/brookingsrss/series/futureoftheconstitution/~topics.nytimes.com/top/news/business/companies/comcast_corporation/index.html?inline=nyt-org">Comcast</a> and <a href="http://webfeeds.brookings.edu/~/t/0/0/brookingsrss/series/futureoftheconstitution/~topics.nytimes.com/top/news/business/companies/verizon_communications_inc/index.html?inline=nyt-org">Verizon</a> to block controversial speech at the network level. Europe and the U.S. recently agreed to require Internet service providers to identify and block child pornography, and in Europe there are growing demands for network-wide blocking of terrorist-incitement videos. As a result, Wong and her colleagues worry that Google’s ability to make case-by-case decisions about what links and videos are accessible through Google’s sites may be slowly circumvented, as countries are requiring the companies that give us access to the Internet to build top-down censorship into the network pipes.</p>
    <p>It is not only foreign countries that are eager to restrict speech on Google and YouTube. In May 2008, Joseph Lieberman, who has become the A. Mitchell Palmer of the digital age, had his staff contact Google and demand that the company remove from YouTube dozens of what he described as jihadist videos. After viewing the videos one by one, Wong and her colleagues removed some of the videos but refused to remove those that they decided didn’t violate YouTube guidelines. Lieberman wasn’t satisfied. In an angry follow-up letter to Eric Schmidt, the C.E.O. of Google, Lieberman demanded that all content he characterized as being “produced by Islamist terrorist organizations” be immediately removed from YouTube as a matter of corporate judgment — even videos that didn’t feature hate speech or violent content or violate U.S. law. Wong and her colleagues responded by saying, “YouTube encourages free speech and defends everyone’s right to express unpopular points of view.” Recently, Google and YouTube announced new guidelines prohibiting videos “intended to incite violence.”</p>
    <p>That category scrupulously tracks the Supreme Court’s rigorous First Amendment doctrine, which says that speech can be banned only when it poses an imminent threat of producing serious lawless action. Unfortunately, Wong and her colleagues recently retreated from that bright line under further pressure from Lieberman. In November 2010, YouTube added a new category that viewers can click to flag videos for removal: “promotes terrorism.” Twenty-four hours of video are uploaded to YouTube every minute, and viewers can already use a series of categories to request removal, including “violent or repulsive content” or inappropriate sexual content. Although hailed by Senator Lieberman, the new “promotes terrorism” category is potentially troubling because it goes beyond the narrow test of incitement to violence that YouTube had previously used to flag terrorism-related videos for removal. YouTube’s capitulation to Lieberman shows that a user-generated system for enforcing community standards will never protect speech as scrupulously as unelected judges enforcing strict rules about when speech can be viewed as a form of dangerous conduct. </p>
    <p>Google remains a better guardian for free speech than internet companies like Facebook and Twitter, which have refused to join the Global Network Initiative, an industry-wide coalition committed to upholding free speech and privacy. But the recent capitulation of YouTube shows that Google’s “trust us” model may not be a stable way of protecting free speech in the twenty-first century, even though the alternatives to trusting Google – such as authorizing national regulatory bodies around the globe to request the removal of controversial videos – might protect less speech than Google’s “Decider” model currently does. </p>
    <p>I’d like to conclude by stressing the complexity of protecting constitutional values like privacy and free speech in the age of Google and Facebook, which are not formally constrained by the Constitution. In each of my examples – 24/7 Facebook surveillance, blob machines, escaping your Facebook past, and promoting free speech on YouTube and Google – it’s possible to imagine a rule or technology that would protect free speech and privacy, while also preserving security—a blob-machine-like solution. But in some areas, those blob-machine-like solutions are more likely, in practice, to be adopted than others. Engaged minorities may demand blob machines when they personally experience their own privacy being violated; but they may be less likely to rise up against the slow expansion of surveillance cameras, which transform expectations of privacy in public. Judges in the American system may be more likely to resist ubiquitous surveillance in the name of <i>Roe v. Wade</i>-style autonomy than they are to create a legal right to allow people to edit their Internet pasts, which relies on ideas of dignity that in turn require a social consensus that in America, at least, does not exist. As for free speech, it is being anxiously guarded for the moment by Google, but the tremendous pressures from consumers and governments are already making it hard to hold the line at removing only speech that threatens imminent lawless action. </p>In translating constitutional values in light of new technologies, it’s always useful to ask: What would Brandeis do? Brandeis would never have tolerated unpragmatic abstractions, which have the effect of giving citizens less privacy in the age of cloud computing than they had during the founding era. 
In translating the Constitution into the challenges of our time, Brandeis would have considered it a duty actively to engage in the project of constitutional translation in order to preserve the Framers’ values in a startlingly different technological world. But the task of translating constitutional values can’t be left to judges alone: it also falls to regulators, legislators, technologists, and, ultimately, to politically engaged citizens. As Brandeis put it, “If we would guide by the light of reason, we must let our minds be bold.” <div>
<br clear="all"><hr align="left" width="33%"><div id="ftn1"><p><a href="#_ftnref1" name="_ftn1">[1]</a> See <i>Florida v. Riley</i>, 488 U.S. 445 (1989) (O’Connor, J., concurring). 
<br><a href="#_ftnref2" name="_ftn2">[2]</a> See <i>United States v. Miller</i>, 425 U.S. 435 (1976).
<br><a href="#_ftnref3" name="_ftn3">[3]</a> See <i>United States v. Knotts</i>, 460 U.S. 276, 283-4 (1983). 
<br><a href="#_ftnref4" name="_ftn4">[4]</a> See <i>United States v. Pineda-Morena</i>, 591 F.3d 1212 (9th Cir. 2010); <i>United States v. Garcia</i>, 474 F.3d 994 (7<sup>th</sup> Cir. 2007); <i>United States v. Marquez</i>, 605 F.3d 604 (8<sup>th</sup> Cir. 2010). 
<br><a href="#_ftnref5" name="_ftn5">[5]</a> See <i>United States v. Maynard</i>, 615 F.3d 544 (D.C. Cir 2010). 
<br><a href="#_ftnref6" name="_ftn6">[6]</a> 615 F.3d at 558.  
<br><a href="#_ftnref7" name="_ftn7">[7]</a> Id. at 562.
<br><a href="#_ftnref8" name="_ftn8">[8]</a> See Declan McCullagh, “Senator Pushes for Mobile Privacy Reform,” <i>CNet News</i>, March 22, 2011, available at <a href="http://webfeeds.brookings.edu/~/t/0/0/brookingsrss/series/futureoftheconstitution/~m.news.com/2166-12_3-20045723-281.html">http://m.news.com/2166-12_3-20045723-281.html</a> 
<br><a href="#_ftnref9" name="_ftn9">[9]</a> <i>Lawrence v. Texas</i>, 539 U.S. 558, 562 (2003). 
<br><a href="#_ftnref10" name="_ftn10">[10]</a> The discussion of the blob machines is adapted from “Nude Breach,” <i>New Republic</i>, December 13, 2010. 
<br><a href="#_ftnref11" name="_ftn11">[11]</a> <i>United States v. Place</i>, 462 U.S. 696 (1983). 
<br><a href="#_ftnref12" name="_ftn12">[12]</a> <i>U.S. v. Davis</i>, 482 F.2d 893, 913 (9th Cir. 1973).
<br><a href="#_ftnref13" name="_ftn13">[13]</a> <i>Safford Unified School District v. Redding</i>, 557 U.S. ___ (2009). 
<br><a href="#_ftnref14" name="_ftn14">[14]</a> The discussion of digital forgetting is adapted from “The End of Forgetting,” <i>New York Times Magazine</i>, July 25, 2010. 
<br><a href="#_ftnref15" name="_ftn15">[15]</a><i>Snyder v. Millersville University</i>, No. 07-1660 (E.D. Pa. Dec. 3, 2008). 
<br><a href="#_ftnref16" name="_ftn16">[16]</a> Brandeis and Warren, “The Right to Privacy,” 4 Harv. L. Rev. 193 (1890).
<br><a href="#_ftnref17" name="_ftn17">[17]</a> Vinod Sreeharsha, Google and Yahoo Win Appeal in Argentine Case, <i>N.Y.  Times</i>, August 20, 2010, B4.
<br><a href="#_ftnref18" name="_ftn18">[18]</a> See Belinda Luscombe, “Tiger Text: An iPhone App for Cheating Spouses?”, <i>Time.com</i>, Feb. 26, 2010, available at <a href="http://webfeeds.brookings.edu/~/t/0/0/brookingsrss/series/futureoftheconstitution/~www.time.com/time/business/article/0,8599,1968233,00.html">http://www.time.com/time/business/article/0,8599,1968233,00.html</a> 
<br><a href="#_ftnref19" name="_ftn19">[19]</a>Marshall Kirkpatrick, “Facebook’s Zuckerbeg Says the Age of Privacy Is Over,” <i>ReadWriteWeb.com</i>, January 9, 2010, available at <a href="http://webfeeds.brookings.edu/~/t/0/0/brookingsrss/series/futureoftheconstitution/~www.readwriteweb.com/archives/facebooks_zuckerberg_says_the_age_of_privacy_is_ov.php">http://www.readwriteweb.com/archives/facebooks_zuckerberg_says_the_age_of_privacy_is_ov.php</a> 
<br><a href="#_ftnref20" name="_ftn20">[20]</a> Aemon Malone, “X-Pire Aims to Cut down on Photo D-Tagging on Facebook,” <i>Digital Trends.com</i>, January 17, 2011, available at <a href="http://webfeeds.brookings.edu/~/t/0/0/brookingsrss/series/futureoftheconstitution/~www.digitaltrends.com/social-media/x-pire-adds-expiration-date-to-digital-photos/">http://www.digitaltrends.com/social-media/x-pire-adds-expiration-date-to-digital-photos/</a> 
<br><a href="#_ftnref21" name="_ftn21">[21]</a> The discussion of free speech that follows is adapted from “Google’s Gatekeepers,” <i>New York Times Magazine</i>, November 30, 2008.</p></div></div></p><h4>
		Downloads
	</h4><ul>
		<li><a href="http://webfeeds.brookings.edu/~/t/0/0/brookingsrss/series/futureoftheconstitution/~www.brookings.edu/~/media/research/files/papers/2011/5/02-free-speech-rosen/0502_free_speech_rosen.pdf">Download the Full Paper</a></li>
	</ul><div>
		<h4>
			Authors
		</h4><ul>
			<li><a href="http://webfeeds.brookings.edu/~/t/0/0/brookingsrss/series/futureoftheconstitution/~www.brookings.edu/experts/rosenj?view=bio">Jeffrey Rosen</a></li>
		</ul>
	</div><div>
		Image Source: David Malan
	</div>
</div>]]>
</content:encoded></item>
<item>
<feedburner:origLink>http://www.brookings.edu/research/papers/2011/04/19-surveillance-laws-kerr?rssid=Future+of+the+Constitution</feedburner:origLink><guid isPermaLink="false">{B01CED73-290C-4078-9763-CA0BC42FF055}</guid><link>http://webfeeds.brookings.edu/~/65487889/0/brookingsrss/series/futureoftheconstitution~Use-Restrictions-and-the-Future-of-Surveillance-Law</link><title>Use Restrictions and the Future of Surveillance Law </title><description><![CDATA[<div>
	<img src="http://www.brookings.edu/~/media/research/images/d/da%20de/dc_metro_001_16x9.jpg?w=120" alt="" border="0" /><br /><p><b>Introduction</b></p><p><p>The year 2030 was the year of the subway terror threat.  As far back as the 2004 Madrid train bombings, terrorists had seen how a single modest attack on a transit system could wreak havoc on a busy city center.  Sporadic attacks continued in the first three decades of the 21st century, including unsuccessful attacks on the New York subway in 2018 and the Washington, DC, Metro system in 2023.</p>
    <p>But 2030 changed everything.  On January 1, 2030, Abdullah Omar, the leader of the Brotherhood, the reincarnation of the earlier Al Qaeda network, made an ominous announcement.</p>
    <p>The Brotherhood had a dozen sleeper cells in the United States, Omar announced.  In 2030, he would activate the cells.  The cells would launch terror attacks on the transit systems of each of five major cities.  Each system would be hit twice, and a few would be hit more.  Omar threatened that each attack would come with a “surprise.”</p>
    <p>Omar named the transit systems:  The New York City Subway, the Washington Metro, the Chicago “L,” the San Francisco BART system, and the Boston “T.”   Each system would be attacked during the year unless the United States agreed to withdraw all support from the state of Israel. </p>
    <p>Some critics dismissed the threat as posturing.  Others doubted the Brotherhood could pull it off. </p>
    <p>But in classified briefings, the Director of National Intelligence told President Booker that he thought the threat was extremely real.  Omar’s promised “surprise” was likely some kind of biological attack.  Some attacks might fail.  But others could work.  The overall damage to life and to the economy amounted to a grave national threat, he explained, and the threat demanded a thorough response.</p>
    <p>President Booker agreed.  He set up a Commission to advise him on how to respond.  The Commission, consisting of top intelligence and national security officials, recommended establishing a new federal surveillance system.  The system would be known formally as the “Minding Our National-Interest Transit Or Rail” program.  It would be known informally by its acronym: MONITOR.</p>
    <p>MONITOR worked by requiring all subway passengers to use a MONITOR card when they entered subway systems.  Each card was activated by its owner’s fingerprints.   The fingerprints identified the user, and the card kept records of where the user entered and exited the system.   </p>
    <p>The Department of Homeland Security administered the MONITOR system out of a central office in downtown Washington, DC.  MONITOR’s computers kept records of every entry into and exit from the subway, and that information was fed into the government’s database in its central office.</p>
    <p>The system assigned each subway rider one of three colors.  The first color was green, which meant that the rider was authorized to ride the subway. The second color was yellow, which meant that the user was a “person of interest” whom the government wanted to follow (such as someone on a terrorist watchlist).  Yellow riders were allowed to enter the subway, but their progress was flagged by the MONITOR computers.  The third color was red.  Riders assigned red were not allowed to enter the subway system at all. </p>
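    <p>The entry rule just described (red barred, yellow admitted but flagged, green admitted freely) can be sketched in a few lines. MONITOR is this essay’s hypothetical, so the function and names below are illustrative assumptions, not a description of any real system:</p>

```python
GREEN, YELLOW, RED = "green", "yellow", "red"

def check_entry(rider_color, station, flag_log):
    """Hypothetical MONITOR gate check: return whether entry is allowed,
    recording flagged (yellow) riders' movements in flag_log."""
    if rider_color == RED:
        return False                            # red riders are barred entirely
    if rider_color == YELLOW:
        flag_log.append(("flagged", station))   # progress tracked centrally
    return True                                 # green and yellow may enter

log = []
assert check_entry(GREEN, "Metro Center", log) is True
assert check_entry(YELLOW, "Metro Center", log) is True
assert check_entry(RED, "Metro Center", log) is False
assert log == [("flagged", "Metro Center")]
```

    <p>Note that the privacy-relevant data (the flag log) accumulates even for riders who are never stopped, which is the design feature the essay’s later argument about use restrictions addresses.</p>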
    <p>MONITOR was up and running by late February, and it ran through the end of the year.  By most accounts, it was a mixed success.   Its most celebrated use was identifying a terror cell known as the “South Loop Seven.” </p>
    <p>The South Loop Seven was a group of seven young Muslim men who attempted to enter the Chicago “L” system within minutes of each other.  Four of the seven men had been flagged as yellow because they were on a terrorist watch list. The entrance of all four yellow-marked riders into the same station in a short period triggered an immediate response from Homeland Security.  </p>
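    <p>The trigger that caught the South Loop Seven is essentially a sliding-window rule: an alert fires when several yellow-flagged riders enter the same station within a short interval. A minimal sketch, assuming hypothetical thresholds (a ten-minute window, three flagged riders) that the essay does not specify:</p>

```python
from collections import defaultdict

def yellow_cluster_alert(entries, window_minutes=10, threshold=3):
    """entries: list of (minute, station, color) tuples.
    Return the set of stations where at least `threshold` yellow-flagged
    riders entered within any `window_minutes` span."""
    yellow_times = defaultdict(list)
    for minute, station, color in entries:
        if color == "yellow":
            yellow_times[station].append(minute)

    alerts = set()
    for station, times in yellow_times.items():
        times.sort()
        for start in times:
            # count yellow entries in the window beginning at this entry
            in_window = [t for t in times if start <= t < start + window_minutes]
            if len(in_window) >= threshold:
                alerts.add(station)
    return alerts

entries = [(0, "South Loop", "yellow"), (2, "South Loop", "yellow"),
           (4, "South Loop", "yellow"), (5, "South Loop", "yellow"),
           (3, "Clark/Lake", "green")]
assert yellow_cluster_alert(entries) == {"South Loop"}
```

    <p>The rule only consults entry metadata that MONITOR already collects; the alert itself is the point at which a human is brought into the loop, stage three in the framework developed below.</p>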
    <p>The four men were found minutes later with bomb-related materials in knapsacks.  The “L” trains were shut down immediately.  A search of the station yielded the three other cell members, each of whom also had bomb materials in packages he was carrying.  </p>
    <p>To many observers, MONITOR’s success in stopping the South Loop Seven justified the entire program.  But other uses of MONITOR proved more controversial.</p>
    <p>For example, MONITOR’s access to a fingerprint database drew the attention of the FBI.  The FBI sought to use the fingerprint database to crack unsolved crimes.  MONITOR had not been intended to be used for criminal investigations, but President Booker eventually allowed MONITOR’s data to be provided to the FBI with the proviso that it be used to solve only serious crimes.  Hundreds of crimes were solved.  Some of those crimes were serious, including murder and rape.  Others were decidedly less serious, ranging from mail fraud to tax evasion.</p>
    <p> Abuses occurred, as well. For example, a few employees of the Department of Homeland Security were caught using MONITOR for personal reasons.  One employee used the data to keep tabs on his wife, whom he suspected of having an affair. The employee flagged his wife’s account yellow so he could watch her coming and going through the DC metro system.  </p>
    <p>In another case, an employee of Homeland Security lost a laptop computer that included a MONITOR database containing millions of fingerprint records.  The computer was never recovered.  No one knows if it was destroyed, or if the information eventually made it into the hands of criminals or even foreign governments.  <br><br><strong>The Lessons of MONITOR</strong></p>
    <p class="bodytextfirstpar">What are the lessons of MONITOR?  In my view, MONITOR calls for a shift in our thinking about surveillance.  In the past, the law has tried to regulate surveillance mostly by focusing on whether data can be created.  The focus has been on the first stage of surveillance systems, the collection of data. </p>
    <p>That must change.  Computer surveillance uses widespread collection and analysis of less intrusive information to yield clues normally observable only through the collection of more intrusive information. To achieve those benefits, the law will need to allow relatively widespread collection of data but then give greater emphasis and attention to its use and disclosure.   </p>
    <p>In short, the future of surveillance calls for a shift in the legal system’s focus: not merely a shift in <i>how</i> to regulate but a shift as well in <i>what</i> to regulate.  Instead of focusing solely on the initial collection of information, we need to distribute regulation along the entire spectrum of the surveillance process. The future of surveillance is a future of use restrictions: rules that strictly regulate what the government can do with information it has collected and processed.   </p>
    <p>Of course, the law should still regulate the collection of evidence.  But surveillance law shouldn’t end there.  The shift to computerization requires renewed attention on regulating the use and disclosure of information, not just its collection.  To see why, we need to understand the computerization shift and the stages of surveillance law.  We can then see how use restrictions would be the key to protecting privacy while ensuring security in the case of the MONITOR system.<br><br><strong>The Computerization Shift</strong></p>
    <p>In the past, information ordinarily was collected and shared using the human senses.  We generally knew what we knew because we had either seen it directly or heard it from someone else. Knowledge was based entirely on personal observation. If you wanted to know what was happening, you had to go out and take a look. You had to see what was happening and observe it with your own eyes, or at least speak to those who had done so to get a second-hand account. The human senses regulated everything. </p>
    <p>In that world, surveillance systems were simple.  The “system” was really just a person.  The person would listen or watch.  If he saw something notable, he would tell others about it. </p>
    <p>Computers change everything.  More and more, our daily lives are assisted by and occur through computers.  Computer networks are extraordinary tools for doing remotely what we used to have to do in person. We wake up in the morning, and use the network to send and receive messages. We make our purchases online, using the network to select and order goods.  Instead of hiring a person to watch our property, we use cameras to record what goes on in open places and to keep records of what occurred.  All of these routine steps are facilitated by computers and computer networks. </p>
    <p>The switch from people to computers means that knowing what's happening requires collecting and analyzing data from the networks themselves. The network contains information zipping around the world, and the only way to know what is happening is to analyze it. Specifically, some device must collect the information, and some device must manipulate it. The information must then go from the computer to a person, and in some cases, from a person to the public.  The result is a substitution effect: Work that used to be done entirely by the human senses now must be done in part by tools.</p>
    <p>The shift to computerization complicates the process of surveillance.  In a world of human surveillance, a system of surveillance was one step: The human would observe the information and then disclose it to others.  Computers add steps to the process in a critical way.   Instead of one step, there are now four: computer collection, computer processing, human disclosure, and public disclosure.   To see how computers change the way the law should regulate surveillance, we need to focus on those four steps.  <br><br><strong>The Four Stages of Computer Surveillance </strong></p>
    <p>The shift to computerization has profound consequences for how we think about surveillance law.  There are now four basic stages of computer-based surveillance systems:  1) data collection, 2) data manipulation by a machine, 3) human disclosure, and 4) public disclosure.  A threshold problem faced by any system of surveillance law is which of these steps – or which combination of them – should be the focal points of legal regulation.  For example, should the law focus on regulating the initial collection of information, leaving the downstream questions of processing and use unregulated?  Alternatively, should the law allow broad initial collection, and then more carefully restrict human access or eventual use and disclosure?  </p>
    <p>
      <i>Evidence Collection</i> </p>
    <p class="bodytextfirstpar">The first stage of any government surveillance program is evidence collection.  The idea here is simple: surveillance requires access to and copying of information.  Evidence collection can occur in many different ways.  It may occur through use of a device such as a “bug” or a wiretapping program.  Alternatively, the government may obtain a copy of data collected elsewhere, such as from a private third-party provider.  The evidence may be stored in any form.  Although electronic forms are most common today, it could be on paper or on a magnetic tape or some other mechanism.  In all of these cases, the government comes into possession of its own copy of the information. </p>
    <p>The rationale for regulating evidence collection is obvious: The government cannot misuse evidence if it does not have it in the first place.  Conversely, permitting evidence collection and only regulating subsequent use or disclosure can permit governmental abuses. </p>
    <p>
      <i>Data Manipulation by Machine</i> </p>
    <p class="bodytextfirstpar">Data manipulation by a machine provides the next stage of surveillance systems. At this stage, the government has the data in its possession, and it now wants to manipulate the information to achieve particular goals.  Perhaps the government wants to aggregate the information into a database.  Perhaps the government wants to aggregate the information and then “mine” the data for trends or clues that might signal a likelihood of criminal or terrorist activity. </p>
    <p>Or perhaps the government wants to combine two databases, adding information developed for one agency or one reason with information developed for another agency or reason.  Whatever the goals, we can assume at this stage that no human being accesses the information or the results of any analysis of it.  The collected information exists but is not viewed by any person. </p>
    <p>
      <i>Disclosure to a Person inside the Program</i> </p>
    <p>The third stage of a surveillance system is disclosure to a person who is a part of the surveillance program.  At this stage, an individual with proper access to the database receives the fruits of the data collection and manipulation.  </p>
    <p>For example, an IRS employee tasked with reviewing tax information may enter queries into a database of tax filings. A police officer who has just pulled over a driver for speeding may query a database of driving records to determine if the driver has received speeding tickets in the past.  A keyword search through a database of seized e-mails may reveal positive “hits” where the keyword appeared.   In all of these cases, information from or about the database is disclosed to a government employee with proper access rights to the database.</p>
    <p>This stage of surveillance systems raises privacy concerns because it involves human access to sensitive information, and human access is a necessary step to abuse.  Unlike stage two, data manipulation, stage three envisions giving government employees access to often very private data.  Access creates the possibility of abuse, triggering privacy concerns beyond stage two. </p>
    <p>
      <i>Public Disclosure </i>
    </p>
    <p>The fourth and final stage is disclosure outside the government. At this stage, the information collected and analyzed by the government is actually disclosed or used outside the agency. </p>
    <p>For example, the government might seek to use the fruits of wiretapping in a criminal case, and therefore might disclose the private phone calls in open court for the jury to see.  A government insider might seek to embarrass someone else, and might leak private information about that person from a government database to the press.  In some cases, the government will disclose the information pursuant to a formal request, such as a request under the Freedom of Information Act. In all of these examples, information collected by the government is disclosed to members of the public.  </p>
    <p>Outside disclosure can occur in different forms.  In some cases, the disclosure will be direct: a government official with control over information will communicate the information explicitly, authenticating the information disclosed.  This would be the case when the government seeks to disclose the fruits of wiretapping for use in a criminal case.  </p>
    <p>In many cases, however, the disclosure will be indirect.  If government data-mining of collected call records leads officials to determine that they have identified a terrorist cell, they might respond by sweeping in and arresting the members of the cell.  The fact of the arrest does not actually disclose the data collected or metadata obtained; however, the arrests might be used to help piece together the government’s surveillance.  The information isn’t disclosed, but actions based on the information may be public and in some cases will resemble a direct disclosure in substance if not in form. <br><br><strong>The Old Law of Surveillance</strong></p>
    <p>In the past, the law of surveillance has focused primarily on the first stage of surveillance systems, the initial collection of evidence. The Fourth Amendment’s prohibition on unreasonable searches and seizures regulates access to information, not downstream use.  If the government comes across information legally, then it is free to use that information however officials would like.  </p>
    <p>The reasons for this focus are largely historical.  The Fourth Amendment was enacted to limit the government’s ability to break into homes and other private spaces in order to take away private property.  Breaking into the home was a search.  Taking away property was a seizure.  As a result, the Fourth Amendment was designed to focus on the initial invasion of privacy – the initial entrance into private spaces – and the retrieval of what the government observed.  Once property or information is exposed and retrieved, the work of the Fourth Amendment is done.</p>
    <p>The statutory Wiretap Act has a similar focus.  The Wiretap Act’s most important prohibition is against the “intercept” of data without a warrant or an applicable exception.  Intercept is defined as “acquisition” of the contents of the data, which means that the Wiretap Act regulates the initial stage of evidence collection.  Surveillance laws such as the Stored Communications Act also regulate the initial government acquisition of data; the laws focus on regulating when the government can obtain data, rather than what the government does once the information has been obtained.  </p>
    <p>In contrast, the later stages have received little attention from privacy laws.  The law mostly focuses on the collection of evidence; relatively little attention is paid to what happens afterwards.  </p>
    <p>Exceptions exist. For example, information in tax returns filed with the IRS generally stays with the IRS; the FBI is not normally given access to that information for criminal investigations.  Similarly, information obtained by a grand jury pursuant to grand jury authorities can only be disclosed within the government to those working on the criminal investigation. The basic idea is that the government is a “they” not an “it,” and limiting data sharing is essentially the same as limiting data collection for individual groups and institutions with different roles within the government.</p>
    <p>But these laws are the exception, not the rule. For the most part, the law of surveillance has focused on how evidence is collected, rather than how it has been processed, used, and disclosed.  <br><br><strong>The Case for Use Restrictions</strong></p>
    <p>The new forms of computer surveillance should change that. The benefit of computer surveillance is that computers can process information quickly and inexpensively to learn what would otherwise have been unknowable.  Assembling and processing information may support conclusions far more far-reaching than the information would yield if left separate.   If so, data manipulation can have an amplifying effect, turning low-impact information in isolation into high-impact information when processed. </p>
    <p>Reaping these benefits requires surveillance systems that allow the initial collection and processing.  The best way to design such systems is to permit that initial collection but then place sharp limits on the later stages, such as disclosure.</p>
    <p>Of course, choosing where to regulate requires balancing competing concerns in minimizing disclosure risks and maximizing the effectiveness of the surveillance system.  The proper balance will depend on the interests involved.  A database designed to identify terrorists will have a very different government interest from a database designed to identify suspects likely to possess marijuana. A database containing the contents of phone calls is very different from a database containing only the numbers dialed without the contents.  Given the diversity of interests and privacy concerns, it is clear that different surveillance regimes will use different regulatory points in different proportions.</p>
    <p>As a general rule, however, the shift to electronic surveillance systems requires a shift in emphasis from regulating the early stages of surveillance to regulating the later stages of surveillance.  </p>
    <p>In a traditional surveillance system, such as those before the advent of computers, the primary legal regulation sensibly focused on the early stages of surveillance. The shift to computerized systems and the future of low-cost surveillance methods will shift the emphasis to the later stages, and in particular the final stage of public disclosure. </p>
    <p>The advantages of computer surveillance follow from its ability to yield important information through widespread collection and manipulation of generally less intrusive data.  That is, computer surveillance and modern camera surveillance tend to work by gathering more information that is less invasive per datum, and then manipulating it through electronic methods to yield important information that normally would be obtainable only through more invasive surveillance techniques. </p>
    <p>In some cases, such computer and high-technology camera surveillance will be unable to yield serious benefits: Such surveillance should be discontinued for the simple reason that it is not effective.  Where it is effective, and the public need is great enough, the best way to achieve the benefits of surveillance while minimizing the threat to privacy is through use and disclosure limitations. Use and disclosure limitations allow surveillance regimes to achieve the potential benefits of computer surveillance – the ability to reach conclusions from the collection and analysis of less intrusive information akin to those traditionally achieved only through collection and analysis of more intrusive information – while avoiding, to the extent possible, the privacy harms that accompany such surveillance.  </p>
    <p>The best way to achieve the benefits of computer surveillance while minimizing the privacy risks is to place greater focus on the later regulatory stages, and in particular, the final stage of public disclosure.   If computer surveillance is likely to be effective, genuinely achieving a significant public good, widespread collection and analysis is necessary to achieve those benefits.  The law should respond by adding new protections to the output end of the regulatory stage: The law should allow the collection and manipulation of data, but then place significant limits on the use and disclosure of the information.<br><br><strong>Use Restrictions and the MONITOR Program</strong></p>
    <p>We can see how use restrictions can strike the best balance between security and privacy by returning to the MONITOR program of 2030.  In that example, the public need was great.  The threat was real.  Plus, the system was designed to have the capacity to detect threats that could then be stopped.  Some sort of monitoring program was necessary.</p>
    <p>The mixed success of the MONITOR program was due to its mixed uses.  MONITOR was used properly when it led to the capture of the South Loop Seven. This was the kind of use that its designers had in mind, and that most readers will applaud.  </p>
    <p>On the other hand, MONITOR was not created with clear limitations on its use.  In particular, the example left open whether the information collected by MONITOR could be used to solve crimes.  This presents a slippery slope. Once the information is created, there will be pressures to use it for a wider and wider range of government interests and a broader range of crimes.  </p>
    <p>Opinions will differ on where lines should be drawn.  However, clear use limitations could avoid the slippery slope altogether.  A clear rule that MONITOR information could not be disclosed to criminal investigators under any circumstances could minimize the risk that MONITOR information could be used for less and less serious government interests.  </p>
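    <p>A bright-line use restriction of this kind is straightforward to express as a purpose check at the point of access: queries must declare a purpose, and the rule is enforced before any record is returned. The purposes, field names, and error below are illustrative assumptions, not drawn from any actual statute:</p>

```python
# A statute authorizing MONITOR could whitelist the permitted purposes
# at the same time it authorizes collection (hypothetical values).
ALLOWED_PURPOSES = {"counterterrorism"}

def query_monitor(records, rider_id, purpose):
    """Return a rider's records only if the declared purpose is permitted;
    criminal investigation, for example, is categorically barred."""
    if purpose not in ALLOWED_PURPOSES:
        raise PermissionError(f"use restriction: {purpose!r} access barred")
    return [r for r in records if r["rider"] == rider_id]

records = [{"rider": "A1", "station": "Foggy Bottom"}]
assert query_monitor(records, "A1", "counterterrorism") == records

blocked = False
try:
    query_monitor(records, "A1", "criminal_investigation")
except PermissionError:
    blocked = True
assert blocked
```

    <p>The point of the sketch is that the restriction binds at query time regardless of what has already been collected, which is exactly the back-end emphasis the essay argues for.</p>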
    <p>The other uses of MONITOR were more obvious abuses.   Employees misused the data for personal reasons instead of official ones.  Data was disclosed inadvertently when an employee lost a laptop.   Here the law should impose strict limitations on use and disclosure and ensure that they are enforceable.  Data security is paramount, and remedies for violations should be harsh.  </p>
    <p>The broad lesson of MONITOR is that lawmakers should focus as much or more on the back end of surveillance systems as on the front end.  If computerized surveillance systems can achieve critical public benefits that make them worthwhile, the emphasis should shift from whether the information can be collected to the legal limitations on how it is processed, used, and disclosed.  The shift to computerization adds new steps, and the law must adjust to regulate them.<br><br><strong>Courts or Congress?</strong></p>
    <p>The final question is what branch of government will create the use restrictions I have in mind.  Can courts do this in the name of the Fourth Amendment?  Or is it up to Congress?</p>
    <p>In my view, it is up to Congress.  The Fourth Amendment prohibits unreasonable searches and seizures.   Use limitations are neither searches nor seizures, however. They are restrictions on what the government can do with information <i>after</i> it has searched for and seized it.  As a result, there is little in the way of Fourth Amendment text, history, or precedent that supports recognizing use restrictions as part of Fourth Amendment protections. </p>
    <p>Granted, it is possible to creatively re-imagine Fourth Amendment law in ways that recognize use restrictions.  As far back as 1995, Harold Krent made such an argument.<a href="#_ftn1" name="_ftnref1">[1]</a>   Professor Krent reasoned that obtaining information is a seizure, and that the subsequent use of the information – including downstream disclosures of it – could make the seizure “unreasonable.”  In other words, instead of saying that searches and seizures occur at a specific time, they could be deemed to occur over a period of time.  All uses of information would have to be reasonable, and courts could distinguish acceptable uses of information from unacceptable ones by saying that the former were reasonable and the latter were unreasonable. </p>
    <p>The argument is creative, but I think it is too far a stretch from existing doctrine to expect courts to adopt it.  In my view, there are two basic problems.  First, most of the information collected by the Government is not protected under current Fourth Amendment law.  Collecting third-party records is neither a search nor a seizure (which is why it is frequently collected; information that is protected by the Fourth Amendment is collected only rarely).  Under Professor Krent’s proposal, however, presumably we would need to overhaul that doctrine to make all evidence collection a seizure to enable courts to then pass on the reasonableness of the seizure.  If we took that step, however, we would need an entirely new doctrine on when seizures are reasonable, quite apart from downstream uses.  This would require a fairly dramatic overhaul of existing Fourth Amendment law, all to enable use restrictions. (For a vision of such a dramatic overhaul, consider <a href="http://www.brookings.edu/~/media/Files/rc/papers/2010/1208_4th_amendment_slobogin/1208_4th_amendment_slobogin.pdf">Christopher Slobogin’s paper in this series</a>.)</p>
    <p>Second, disclosures of information come in so many shapes and sizes that courts would have little basis on which to distinguish reasonable from unreasonable uses.   Every database is different, every data point is different, and every disclosure is different.  The kind of fine-grained reasonableness inquiry called for by Fourth Amendment law would leave judges with few clear guideposts, and no historical precedent to follow, in distinguishing uses that violate the Fourth Amendment from those that do not.  For both of these reasons, recognizing use restrictions in Fourth Amendment law may create more problems than it solves.  At the very least, we should not expect courts to take such a leap any time soon.</p><p>In contrast, legislatures are well equipped to enact use restrictions.  They can promulgate bright-line rules concerning information collected under specific government powers, and they can explain the scope of the limitation and the contexts in which it is triggered.   Further, they can legislate use restrictions at the same time they enact the statutes authorizing the evidence collection.  That way, use restrictions can be a part of the original statutory design, rather than something imposed years later by the courts. </p><div><br clear="all"><hr align="left" width="33%"><div id="ftn1"><p><a href="#_ftnref1" name="_ftn1">[1]</a> Harold J. Krent, “Of Diaries and Data Banks: Use Restrictions Under the Fourth Amendment,” 74 Tex. L. Rev. 49 (1995).</p></div></div></p><h4>
		Downloads
	</h4><ul>
		<li><a href="http://www.brookings.edu/~/media/research/files/papers/2011/4/19-surveillance-laws-kerr/0419_surveillance_law_kerr.pdf">Download the Paper</a></li>
	</ul><div>
		<h4>
			Authors
		</h4><ul>
			<li>Orin S. Kerr</li>
		</ul>
	</div><div>
		Image Source: Paul Edmondson
	</div>
</div>]]>
</description><pubDate>Tue, 19 Apr 2011 11:37:00 -0400</pubDate><dc:creator>Orin S. Kerr</dc:creator>
<itunes:summary> 
Introduction
The year 2030 was the year of the subway terror attack threat.&#xA0; As far back as the 2004 Madrid subway bombing, terrorists had seen how a single modest subway attack could wreak havoc on a busy city center. &#xA0;Sporadic attacks continued in the first three decades of the 21st century, including unsuccessful attacks on the New York subway in 2018 and the Washington, DC, Metro system in 2023. 
But 2030 changed everything.&#xA0; On January 1, 2030, Abdullah Omar, the leader of the Brotherhood, the reincarnation of the earlier Al Qaeda network, made an ominous announcement. 
The Brotherhood had a dozen sleeper cells in the United States, Omar announced.&#xA0; In 2030, he would activate the cells.&#xA0; The cells would launch terror attacks on the transit systems of each of five major cities.&#xA0; Each system would be hit twice, and a few would be hit more.&#xA0; Omar threatened that each attack would come with a &#8220;surprise.&#8221; 
Omar named the transit systems:&#xA0; The New York City Subway, the Washington Metro, the Chicago &#8220;L,&#8221; the San Francisco BART system, and the Boston &#8220;T.&#8221;&#xA0;&#xA0; Each system would be attacked during the year unless the United States agreed to withdraw all support from the state of Israel. 
Some critics dismissed the threat as posturing.&#xA0; Others doubted the Brotherhood could pull it off. 
But in classified briefings, the Director of National Intelligence told President Booker that he thought the threat was extremely real.&#xA0; Omar&#x2019;s promised &#8220;surprise&#8221; was likely some kind of biological attack.&#xA0; Some attacks might fail.&#xA0; But others could work.&#xA0; The overall damage to life and to the economy amounted to a grave national threat, he explained, and the threat demanded a thorough response. 
President Booker agreed.&#xA0; He set up a Commission to advise him on how to respond.&#xA0; The Commission, consisting of top intelligence and national security officials, recommended establishing a new federal surveillance system.&#xA0; The system would be known formally as the &#8220;Minding Our National-Interest Transit or Rail&#8221; program.&#xA0; It would be known informally by its acronym: MONITOR. 
MONITOR worked by requiring all subway passengers to use a MONITOR card when they entered subway systems.&#xA0; Each card was activated by its owner&#x2019;s fingerprints.&#xA0;&#xA0; The fingerprints identified the user, and the system kept records of where the user had entered and exited the system.&#xA0;&#xA0; 
The Department of Homeland Security administered the MONITOR system out of a central office in downtown Washington, DC.&#xA0; MONITOR&#x2019;s computers kept records of every entry into and exit from the subway, and that information would be fed into the government&#x2019;s database in its central office. 
The system assigned each subway rider one of three colors.&#xA0; The first color was green, which meant that the rider was authorized to ride the subway. The second color was yellow, which meant that the user was a &#8220;person of interest&#8221; whom the government wanted to follow (such as someone on a terrorist watchlist).&#xA0; Yellow riders were allowed to enter the subway, but their progress was flagged by the MONITOR computers.&#xA0; The third color was red.&#xA0; Riders assigned red were not allowed to enter the subway system at all. 
MONITOR was up and running by late February, and it ran through the end of the year.&#xA0; By most accounts, it was a mixed success.&#xA0;&#xA0; Its most celebrated use was identifying a terror cell known as the &#8220;South Loop Seven.&#8221; 
The South Loop Seven was a group of seven young Muslim men who attempted to enter the Chicago &#8220;L&#8221; system within minutes of each other.&#xA0; Four of the seven men had been flagged as yellow because they were on a terrorist watch list. The entrance of all four yellow-marked riders into the same station ... </itunes:summary>
<itunes:subtitle>Introduction
The year 2030 was the year of the subway terror attack threat.&#xA0; As far back as the 2004 Madrid subway bombing, terrorists had seen how a single modest subway attack could wreak havoc on a busy city center.</itunes:subtitle><content:encoded><![CDATA[<div>
	<img src="http://www.brookings.edu/~/media/research/images/d/da%20de/dc_metro_001_16x9.jpg?w=120" alt="" border="0" />
<br><p><b>Introduction</b></p><p>The year 2030 was the year of the subway terror attack threat.  As far back as the 2004 Madrid subway bombing, terrorists had seen how a single modest subway attack could wreak havoc on a busy city center.  Sporadic attacks continued in the first three decades of the 21st century, including unsuccessful attacks on the New York subway in 2018 and the Washington, DC, Metro system in 2023.</p>
    <p>But 2030 changed everything.  On January 1, 2030, Abdullah Omar, the leader of the Brotherhood, the reincarnation of the earlier Al Qaeda network, made an ominous announcement.</p>
    <p>The Brotherhood had a dozen sleeper cells in the United States, Omar announced.  In 2030, he would activate the cells.  The cells would launch terror attacks on the transit systems of each of five major cities.  Each system would be hit twice, and a few would be hit more.  Omar threatened that each attack would come with a “surprise.”</p>
    <p>Omar named the transit systems:  The New York City Subway, the Washington Metro, the Chicago “L,” the San Francisco BART system, and the Boston “T.”   Each system would be attacked during the year unless the United States agreed to withdraw all support from the state of Israel. </p>
    <p>Some critics dismissed the threat as posturing.  Others doubted the Brotherhood could pull it off. </p>
    <p>But in classified briefings, the Director of National Intelligence told President Booker that he thought the threat was extremely real.  Omar’s promised “surprise” was likely some kind of biological attack.  Some attacks might fail.  But others could work.  The overall damage to life and to the economy amounted to a grave national threat, he explained, and the threat demanded a thorough response.</p>
    <p>President Booker agreed.  He set up a Commission to advise him on how to respond.  The Commission, consisting of top intelligence and national security officials, recommended establishing a new federal surveillance system.  The system would be known formally as the “Minding Our National-Interest Transit or Rail” program.  It would be known informally by its acronym: MONITOR.</p>
    <p>MONITOR worked by requiring all subway passengers to use a MONITOR card when they entered subway systems.  Each card was activated by its owner’s fingerprints.   The fingerprints identified the user, and the system kept records of where the user had entered and exited the system.   </p>
    <p>The Department of Homeland Security administered the MONITOR system out of a central office in downtown Washington, DC.  MONITOR’s computers kept records of every entry into and exit from the subway, and that information would be fed into the government’s database in its central office.</p>
    <p>The system assigned each subway rider one of three colors.  The first color was green, which meant that the rider was authorized to ride the subway. The second color was yellow, which meant that the user was a “person of interest” whom the government wanted to follow (such as someone on a terrorist watchlist).  Yellow riders were allowed to enter the subway, but their progress was flagged by the MONITOR computers.  The third color was red.  Riders assigned red were not allowed to enter the subway system at all. </p>
    <p>MONITOR was up and running by late February, and it ran through the end of the year.  By most accounts, it was a mixed success.   Its most celebrated use was identifying a terror cell known as the “South Loop Seven.” </p>
    <p>The South Loop Seven was a group of seven young Muslim men who attempted to enter the Chicago “L” system within minutes of each other.  Four of the seven men had been flagged as yellow because they were on a terrorist watch list. The entrance of all four yellow-marked riders into the same station in a short period triggered an immediate response from Homeland Security.  </p>
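As a purely illustrative sketch (the essay describes no actual implementation; the names `Entry`, `co_entry_alert`, and the window and threshold values are all invented assumptions), the kind of co-entry trigger imagined here could be a sliding-window check over entry events:

```python
from collections import deque
from dataclasses import dataclass

# Hypothetical sketch of MONITOR's co-entry trigger: if several
# yellow-flagged ("person of interest") riders enter the same station
# within a short window, flag that station for a response.
GREEN, YELLOW, RED = "green", "yellow", "red"
WINDOW_SECONDS = 300      # assumed alert window: 5 minutes
YELLOW_THRESHOLD = 3      # assumed: 3+ yellow riders together

@dataclass
class Entry:
    rider_id: str
    station: str
    timestamp: int        # seconds since midnight
    color: str

def co_entry_alert(entries):
    """Return stations where >= YELLOW_THRESHOLD yellow riders
    entered within WINDOW_SECONDS of one another."""
    alerts = set()
    recent = {}           # station -> deque of recent yellow entry times
    for e in sorted(entries, key=lambda e: e.timestamp):
        if e.color == RED:
            continue      # red riders are refused at the gate
        if e.color != YELLOW:
            continue      # green riders pass without tracking here
        q = recent.setdefault(e.station, deque())
        q.append(e.timestamp)
        while q and e.timestamp - q[0] > WINDOW_SECONDS:
            q.popleft()   # drop entries outside the window
        if len(q) >= YELLOW_THRESHOLD:
            alerts.add(e.station)
    return alerts
```

On this sketch, four watchlisted riders entering one station within minutes would trip the alert, while the same riders spread across hours or stations would not.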
    <p>The four men were found minutes later with bomb-related materials in knapsacks.  The “L” trains were shut down immediately.  A search of the station yielded the three other cell members, each of whom also had bomb materials in packages he was carrying.  </p>
    <p>To many observers, MONITOR’s success in stopping the South Loop Seven justified the entire program.  But other uses of MONITOR proved more controversial.</p>
    <p>For example, MONITOR’s access to a fingerprint database drew the attention of the FBI.  The FBI sought to use the fingerprint database to crack unsolved crimes.  MONITOR had not been intended to be used for criminal investigations, but President Booker eventually allowed MONITOR’s data to be provided to the FBI with the proviso that it be used to solve only serious crimes.  Hundreds of crimes were solved.  Some of those crimes were serious, including murder and rape.  Others were decidedly less serious, ranging from mail fraud to tax evasion.</p>
    <p> Abuses occurred, as well. For example, a few employees of the Department of Homeland Security were caught using MONITOR for personal reasons.  One employee used the data to keep tabs on his wife, whom he suspected of having an affair. The employee flagged his wife’s account yellow so he could watch her coming and going through the DC metro system.  </p>
    <p>In another case, an employee of Homeland Security lost a laptop computer that included a MONITOR database containing millions of datasets of fingerprints.  The computer was never recovered.  No one knows if it was destroyed, or if the information eventually made it into the hands of criminals or even foreign governments.  
<br>
<br><strong>The Lessons of MONITOR</strong></p>
    <p class="bodytextfirstpar">What are the lessons of MONITOR?  In my view, MONITOR calls for a shift in our thinking about surveillance.  In the past, the law has tried to regulate surveillance mostly by focusing on whether data can be created.  The focus has been on the first stage of surveillance systems, the collection of data. </p>
    <p>That must change.  Computer surveillance uses widespread collection and analysis of less intrusive information to yield clues normally observable only through the collection of more intrusive information. To achieve those benefits, the law will need to allow relatively widespread collection of data but then give greater emphasis and attention to its use and disclosure.   </p>
    <p>In short, the future of surveillance calls for a shift in the legal system’s focus: not merely a shift in <i>how</i> to regulate but a shift as well in <i>what</i> to regulate.  Instead of focusing solely on the initial collection of information, we need to distribute regulation along the entire spectrum of the surveillance process. The future of surveillance is a future of use restrictions: rules that strictly regulate what the government can do with information it has collected and processed.   </p>
    <p>Of course, the law should still regulate the collection of evidence.  But surveillance law shouldn’t end there.  The shift to computerization requires renewed attention on regulating the use and disclosure of information, not just its collection.  To see why, we need to understand the computerization shift and the stages of surveillance law.  We can then see how use restrictions would be the key to protecting privacy while ensuring security in the case of the MONITOR system.
<br>
<br><strong>The Computerization Shift</strong></p>
    <p>In the past, information ordinarily was collected and shared using the human senses.  We generally knew what we knew because we had either seen it directly or heard it from someone else. Knowledge was based entirely on personal observation. If you wanted to know what was happening, you had to go out and take a look. You had to see what was happening and observe it with your own eyes, or at least speak to those who had done so to get a second-hand account. The human senses regulated everything. </p>
    <p>In that world, surveillance systems were simple.  The “system” was really just a person.  The person would listen or watch.  If he saw something notable, he would tell others about it. </p>
    <p>Computers change everything.  More and more, our daily lives are assisted by and occur through computers.  Computer networks are extraordinary tools for doing remotely what we used to have to do in person. We wake up in the morning, and use the network to send and receive messages. We make our purchases online, using the network to select and order goods.  Instead of hiring a person to watch our property, we use cameras to record what goes on in open places and to keep records of what occurred.  All of these routine steps are facilitated by computers and computer networks. </p>
    <p>The switch from people to computers means that knowing what’s happening requires collecting and analyzing data from the networks themselves. The network contains information zipping around the world, and the only way to know what is happening is to analyze it. Specifically, some device must collect the information, and some device must manipulate it. The information must then go from the computer to a person, and in some cases, from a person to the public.  The result is a substitution effect: Work that used to be done entirely by the human senses now must be done in part by tools.</p>
    <p>The shift to computerization complicates the process of surveillance.  In a world of human surveillance, a system of surveillance was one step: The human would observe the information and then disclose it to others.  Computers add steps to the process in a critical way.  Instead of one step, there are now four steps:  Computer collection, computer processing, human disclosure, and public disclosure.   To see how computers change the way the law should regulate surveillance, we need to focus on those four steps.  
<br>
<br><strong>The Four Stages of Computer Surveillance </strong></p>
    <p>The shift to computerization has profound consequences for how we think about surveillance law.  There are now four basic stages of computer-based surveillance systems:  1) data collection, 2) data manipulation by a machine, 3) human disclosure, and 4) public disclosure.  A threshold problem faced by any system of surveillance law is which of these steps – or which combination of them – should be the focal points of legal regulation.  For example, should the law focus on regulating the initial collection of information, leaving the downstream questions of processing and use unregulated?  Alternatively, should the law allow broad initial collection, and then more carefully restrict human access or eventual use and disclosure?  </p>
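The four stages above can be pictured as separate checkpoints to which a legal regime attaches rules. A minimal sketch, assuming invented names (`Stage`, `permitted`, and the two example regimes are illustrations, not anything the essay itself specifies):

```python
from enum import Enum

# Hypothetical sketch: the four stages of a computer-surveillance
# system as distinct checkpoints the law could regulate independently.
class Stage(Enum):
    COLLECTION = 1           # data acquired by a device or third party
    MACHINE_PROCESSING = 2   # aggregation and mining; no human access
    HUMAN_DISCLOSURE = 3     # an authorized analyst sees the results
    PUBLIC_DISCLOSURE = 4    # information leaves the government

def permitted(stage, rules):
    """A regime maps stages to yes/no rules; an unregulated
    stage defaults to allowed."""
    return rules.get(stage, True)

# A collection-focused regime (the traditional model) restricts stage 1
# and leaves the rest open; a use-restriction regime allows collection
# but limits the final, public-disclosure stage.
traditional = {Stage.COLLECTION: False}
use_restricted = {Stage.COLLECTION: True, Stage.PUBLIC_DISCLOSURE: False}
```

The design point the sketch makes is simply that the choice of checkpoint is independent of the data: the same system can be regulated at the front end, the back end, or both.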
    <p>
      <i>Evidence Collection</i> </p>
    <p class="bodytextfirstpar">The first stage of any government surveillance program is evidence collection.  The idea here is simple: surveillance requires access to and copying of information.  Evidence collection can occur in many different ways.  It may occur through use of a device such as a “bug” or a wiretapping program.  Alternatively, the government may obtain a copy of data collected elsewhere, such as from a private third-party provider.  The evidence may be stored in any form.  Although electronic forms are most common today, it could be on paper, on magnetic tape, or in some other medium.  In all of these cases, the government comes into possession of its own copy of the information. </p>
    <p>The rationale for regulating evidence collection is obvious: The government cannot misuse evidence if it does not have it in the first place.  Conversely, permitting evidence collection and only regulating subsequent use or disclosure can permit governmental abuses. </p>
    <p>
      <i>Data Manipulation by Machine</i> </p>
    <p class="bodytextfirstpar">Data manipulation by a machine provides the next stage of surveillance systems. At this stage, the government has the data in its possession, and it now wants to manipulate the information to achieve particular goals.  Perhaps the government wants to aggregate the information into a database.  Perhaps the government wants to aggregate the information and then “mine” the data for trends or clues that might signal a likelihood of criminal or terrorist activity. </p>
    <p>Or perhaps the government wants to combine two databases, adding information developed for one agency or one reason with information developed for another agency or reason.  Whatever the goals, we can assume at this stage that no human being accesses the information or the results of any analysis of it.  The collected information exists but is not viewed by any person. </p>
    <p>
      <i>Disclosure to a Person inside the Program</i> </p>
    <p>The third stage of a surveillance system is disclosure to a person who is a part of the surveillance program.  At this stage, an individual with proper access to the database receives the fruits of the data collection and manipulation.  </p>
    <p>For example, an IRS employee tasked with reviewing tax information may enter queries into a database of tax filings. A police officer who has just pulled over a driver for speeding may query a database of driving records to determine if the driver has received speeding tickets in the past.  A keyword search through a database of seized e-mails may reveal positive “hits” where the keyword appeared.   In all of these cases, information from or about the database is disclosed to a government employee with proper access rights to the database.</p>
    <p>This stage of surveillance systems raises privacy concerns because it involves human access to sensitive information, and human access is a necessary predicate to abuse.  Unlike stage two, data manipulation, stage three envisions giving government employees access to often very private data.  Access creates the possibility of abuse, triggering privacy concerns beyond stage two. </p>
    <p>
      <i>Public Disclosure </i>
    </p>
    <p>The fourth and final stage is disclosure outside the government. At this stage, the information collected and analyzed by the government is actually disclosed or used outside the agency. </p>
    <p>For example, the government might seek to use the fruits of wiretapping in a criminal case, and therefore might disclose the private phone calls in open court for the jury to see.  A government insider might seek to embarrass someone else, and might leak private information about that person from a government database to the press.  In some cases, the government will disclose the information pursuant to a formal request, such as a request under the Freedom of Information Act. In all of these examples, information collected by the government is disclosed to members of the public.  </p>
    <p>Outside disclosure can occur in different forms.  In some cases, the disclosure will be direct: a government official with control over information will communicate the information explicitly, authenticating the information disclosed.  This would be the case when the government seeks to disclose the fruits of wiretapping for use in a criminal case.  </p>
    <p>In many cases, however, the disclosure will be indirect.  If government data-mining of collected call records leads officials to determine that they have identified a terrorist cell, they might respond by sweeping in and arresting the members of the cell.  The fact of the arrest does not actually disclose the data collected or metadata obtained; however, the arrests might be used to help piece together the government’s surveillance.  The information isn’t disclosed, but actions based on the information may be public and in some cases will resemble a direct disclosure in substance if not in form. 
<br>
<br><strong>The Old Law of Surveillance</strong></p>
    <p>In the past, the law of surveillance has focused primarily on the first stage of surveillance systems, the initial collection of evidence. The Fourth Amendment’s prohibition on unreasonable searches and seizures regulates access to information, not downstream use.  If the government comes across information legally, then it is free to use that information however officials would like.  </p>
    <p>The reasons for this focus are largely historical.  The Fourth Amendment was enacted to limit the government’s ability to break into homes and other private spaces in order to take away private property.  Breaking into the home was a search.  Taking away property was a seizure.  As a result, the Fourth Amendment was designed to focus on the initial invasion of privacy – the initial entrance into private spaces – and the retrieval of what the government observed.  Once property or information is exposed and retrieved, the work of the Fourth Amendment is done.</p>
    <p>The statutory Wiretap Act has a similar focus.  The Wiretap Act’s most important prohibition is the “intercept” of data without a warrant or an applicable exception.  Intercept is defined as “acquisition” of the contents of the data, which means that the Wiretap Act regulates the initial stage of evidence collection.  Surveillance laws such as the Stored Communications Act also regulate the initial government acquisition of data; the laws focus on regulating when the government can obtain data, rather than what the government does once the information has been obtained.  </p>
    <p>In contrast, the later stages have received little attention by privacy laws.  The law mostly focuses on the collection of evidence: Relatively little attention is placed on what happens afterwards.  </p>
    <p>Exceptions exist. For example, information in tax returns filed with the IRS generally stays with the IRS; the FBI is not normally given access to that information for criminal investigations.  Similarly, information obtained by a grand jury pursuant to grand jury authorities can only be disclosed within the government to those working on the criminal investigation. The basic idea is that the government is a “they” not an “it,” and limiting data sharing is essentially the same as limiting data collection for individual groups and institutions with different roles within the government.</p>
    <p>But these laws are the exception, not the rule. For the most part, the law of surveillance has focused on how evidence is collected, rather than how it has been processed, used, and disclosed.  
<br>
<br><strong>The Case for Use Restrictions</strong></p>
    <p>The new forms of computer surveillance should change that. The benefit of computer surveillance is that it can process information quickly and inexpensively to learn what would otherwise have been unknowable.  Assembling and processing information may lead to plausible conclusions far more far-reaching than the separate pieces of information would support.   If so, data manipulation can have an amplifying effect, turning low-impact information in isolation into high-impact information when processed. </p>
    <p>Reaping these benefits requires surveillance systems that allow the initial collection and processing.  The best way to design such systems is to permit that initial collection but then place sharp limits on the later stages, such as disclosure.</p>
    <p>Of course, choosing where to regulate requires balancing competing concerns in minimizing disclosure risks and maximizing the effectiveness of the surveillance system.  The proper balance will depend on the interests involved.  A database designed to identify terrorists will have a very different government interest from a database designed to identify suspects likely to possess marijuana. A database containing the contents of phone calls is very different from a database containing only the numbers dialed without the contents.  Given the diversity of interests and privacy concerns, it is clear that different surveillance regimes will use different regulatory points in different proportions.</p>
    <p>As a general rule, however, the shift to electronic surveillance systems requires a shift in emphasis from regulating the early stages of surveillance to regulating the later stages of surveillance.  </p>
    <p>In traditional surveillance systems, such as those before the advent of computers, the primary legal regulation sensibly focused on the early stages of surveillance. The shift to computerized systems and the future of low-cost surveillance methods will shift the emphasis to the later stages, and in particular the final stage of public disclosure. </p>
    <p>The advantages of computer surveillance follow from their ability to yield important information through widespread collection and manipulation of generally less intrusive data.  That is, computer surveillance and modern camera surveillance tend to work by gathering more information that is less invasive per datum, and then manipulating it through electronic methods to yield important information that normally would be obtainable only through more invasive surveillance techniques. </p>
    <p>In some cases, such computer and high-technology camera surveillance will be unable to yield serious benefits; such surveillance should be discontinued for the simple reason that it is not effective.  Where it is effective, and the public need great enough, the best way to achieve the benefits of surveillance while minimizing the threat to privacy is through use and disclosure limitations. Use and disclosure limitations allow surveillance regimes to achieve the potential benefits of computer surveillance – the ability to reach conclusions from the collection and analysis of less intrusive information akin to those traditionally achieved only through collection and analysis of more intrusive information – while avoiding to the extent possible the privacy harms that accompany such surveillance.  </p>
    <p>The best way to achieve the benefits of computer surveillance while minimizing the privacy risks is to place greater focus on the later regulatory stages, and in particular, the final stage of public disclosure.   If computer surveillance is likely to be effective, genuinely achieving a significant public good, widespread collection and analysis is necessary to achieve those benefits.  The law should respond by adding new protections at the output end of the process: The law should allow the collection and manipulation of data, but then place significant limits on the use and disclosure of the information.
<br>
<br><strong>Use Restrictions and the MONITOR Program</strong></p>
    <p>We can see how use restrictions can lead to the best balance between security and privacy by returning to the MONITOR program of 2030.  In that example, the public need was great.  The threat was real.  Plus, the system was designed to have the capacity to detect threats that could then be stopped.  Some sort of monitoring program was necessary.</p>
    <p>The mixed success of the MONITOR program was due to its mixed uses.  MONITOR was used properly when it led to the capture of the South Loop Seven. This was the kind of use that its designers had in mind, and that most readers will applaud.  </p>
    <p>On the other hand, MONITOR was not created with clear limitations on its use.  In particular, the example left open whether the information collected by MONITOR could be used to solve crimes.  This presents a slippery slope. Once the information is created, there will be pressures to use it for a wider and wider range of government interests and a broader range of crimes.  </p>
    <p>Opinions will differ on where lines should be drawn.  However, clear use limitations could avoid the slippery slope altogether.  A clear rule that MONITOR information could not be disclosed to criminal investigators under any circumstances could minimize the risk that MONITOR information could be used for less and less serious government interests.  </p>
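Such a bright-line rule is simple enough to state mechanically. A hedged sketch, with every name (`ALLOWED_PURPOSES`, `may_disclose`, the audit log) invented for illustration rather than drawn from any actual statute or system:

```python
# Hypothetical sketch of a bright-line use restriction on MONITOR data:
# disclosure is permitted only for the purpose the authorizing statute
# names, and every request is logged for later oversight, granted or not.
ALLOWED_PURPOSES = {"counterterrorism"}   # assumed statutory purpose

audit_log = []

def may_disclose(requester, purpose):
    """Apply the use restriction and record the request either way."""
    allowed = purpose in ALLOWED_PURPOSES
    audit_log.append((requester, purpose, allowed))
    return allowed
```

The point of the rule's rigidity is exactly the slippery-slope concern above: because the check never weighs how serious the requesting investigation is, there is no gradient down which disclosures can slide.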
    <p>The other uses of MONITOR were more obvious abuses.   Employees misused the data for personal reasons instead of official ones.  Data was disclosed inadvertently when an employee lost a laptop.   Here the law should impose strict limitations on use and disclosure and ensure that they are enforceable.  Data security is paramount, and remedies for violations should be harsh.  </p>
    <p>The broad lesson of MONITOR is that lawmakers should focus as much or more on the back end of surveillance systems as on the front end.  If computerized surveillance systems can achieve critical public benefits that make them worthwhile, the emphasis should shift from whether the information can be collected to the legal limitations on how it is processed, used, and disclosed.  The shift to computerization adds new steps, and the law must adjust to regulate them.
<br>
<br><strong>Courts or Congress?</strong></p>
    <p>The final question is what branch of government will create the use restrictions I have in mind.  Can courts do this in the name of the Fourth Amendment?  Or is it up to Congress?</p>
    <p>In my view, it is up to Congress.  The Fourth Amendment prohibits unreasonable searches and seizures.   Use limitations are neither searches nor seizures, however. They are restrictions on what the government can do with information <i>after</i> it has searched for and seized it.  As a result, there is little in the way of Fourth Amendment text, history, or precedent that supports recognizing use restrictions as part of Fourth Amendment protections. </p>
    <p>Granted, it is possible to creatively re-imagine Fourth Amendment law in ways that recognize use restrictions.  As far back as 1995, Harold Krent made such an argument.<a href="#_ftn1" name="_ftnref1">[1]</a>   Professor Krent reasoned that obtaining information is a seizure, and that the subsequent use of the information – including downstream disclosures of it – could make the seizure “unreasonable.”  In other words, instead of saying that searches and seizures occur at a specific time, they could be deemed to occur over a period of time.  All uses of information would have to be reasonable,   and courts could distinguish acceptable uses of information from unacceptable ones by saying that the former were reasonable and the latter were unreasonable. </p>
    <p>The argument is creative, but I think it is too far a stretch from existing doctrine to expect courts to adopt it.  In my view, there are two basic problems.  First, most of the information collected by the government is not protected under current Fourth Amendment law.  Collecting third-party records is neither a search nor a seizure (which is why such records are so frequently collected; information protected by the Fourth Amendment is collected only rarely).  Under Professor Krent’s proposal, however, presumably we would need to overhaul that doctrine to make all evidence collection a seizure, so that courts could then pass on the reasonableness of the seizure.  If we took that step, though, we would need an entirely new doctrine on when seizures are reasonable, quite apart from downstream uses.  This would require a fairly dramatic overhaul of existing Fourth Amendment law, all to enable use restrictions.  (For a vision of such a dramatic overhaul, consider <a href="http://webfeeds.brookings.edu/~/t/0/0/brookingsrss/series/futureoftheconstitution/~www.brookings.edu/~/media/Files/rc/papers/2010/1208_4th_amendment_slobogin/1208_4th_amendment_slobogin.pdf">Christopher Slobogin’s paper in this series</a>.)</p>
    <p>Second, disclosures of information come in so many shapes and sizes that courts would have little basis on which to distinguish reasonable from unreasonable uses.  Every database is different, every data point is different, and every disclosure is different.  The kind of fine-grained reasonableness inquiry called for by Fourth Amendment law would leave judges with few clear guideposts, and no historical precedent, for distinguishing uses that violate the Fourth Amendment from those that do not.  For both of these reasons, recognizing use restrictions in Fourth Amendment law may create more problems than it solves.  At the very least, we should not expect courts to take such a leap any time soon.</p><p>In contrast, legislatures are well-equipped to enact use restrictions.  They can promulgate bright-line rules concerning information collected under specific government powers, and they can explain the scope of the limitation and the contexts in which it is triggered.  Further, they can legislate use restrictions at the same time they enact the statutes authorizing the evidence collection.  That way, use restrictions can be a part of the original statutory design, rather than something imposed years later by the courts.</p><div>
<br clear="all"><hr align="left" width="33%"><div id="ftn1"><p><a href="#_ftnref1" name="_ftn1">[1]</a> Harold J. Krent, Of Diaries and Data Banks: Use Restrictions Under the Fourth Amendment, 74 Texas Law Review 49 (1995).</p></div></div></p><h4>
		Downloads
	</h4><ul>
		<li><a href="http://webfeeds.brookings.edu/~/t/0/0/brookingsrss/series/futureoftheconstitution/~www.brookings.edu/~/media/research/files/papers/2011/4/19-surveillance-laws-kerr/0419_surveillance_law_kerr.pdf">Download the Paper</a></li>
	</ul><div>
		<h4>
			Authors
		</h4><ul>
			<li>Orin S. Kerr</li>
		</ul>
	</div><div>
		Image Source: Paul Edmondson
	</div>
</div>
]]>
</content:encoded></item>
<item>
<feedburner:origLink>http://www.brookings.edu/research/papers/2011/03/09-personhood-boyle?rssid=Future+of+the+Constitution</feedburner:origLink><guid isPermaLink="false">{A0CEE6B9-D50F-4EE0-8A59-8FF87B95F1C2}</guid><link>http://webfeeds.brookings.edu/~/65487890/0/brookingsrss/series/futureoftheconstitution~Endowed-by-Their-Creator-The-Future-of-Constitutional-Personhood</link><title>Endowed by Their Creator?: The Future of Constitutional Personhood</title><description><![CDATA[<div>
	<img src="http://www.brookings.edu/~/media/research/images/b/bp%20bt/brain001_16x9.jpg?w=120" alt="" border="0" /><br /><p><em>Part I of this paper appears below.  To download the full paper, <a href="http://www.brookings.edu/~/media/Research/Files/Papers/2011/3/09-personhood-boyle/0309_personhood_boyle.PDF" name="&lid={B7D93EFE-1E85-4DFA-9886-30512A09E20C}&lpos=loc:body">click here</a>.</em>
<br>
<br>
    <blockquote dir="ltr">
      <blockquote dir="ltr">
Presently, Irving Weissman, the director of Stanford University's Institute of Cancer/Stem Cell Biology and Medicine, is contemplating pushing the envelope of chimera research even further by producing human-mouse chimera whose brains would be composed of one hundred percent human cells. Weissman notes that the mice would be carefully watched: if they developed a mouse brain architecture, they would be used for research, but if they developed a human brain architecture or any hint of humanness, they would be killed. <a href="#_ftn1" name="_ftnref1">[1]</a></blockquote></blockquote></p><p><p>Imagine two entities.</p>
    <p>Hal is a computer-based artificial intelligence, the result of years of development of self-evolving neural networks.  While his programmers provided the hardware, the structure of Hal's processing networks is ever changing, evolving according to basic rules laid down by his creators.  Success according to various criteria－speed of operation, ability to solve difficult tasks such as facial recognition and the identification of emotional states in humans－means that the networks are given more computer resources and allowed to “replicate.”  A certain percentage of randomized variation is deliberately allowed in each new “generation” of networks.  Most fail, but a few outcompete their forebears and the process of evolution continues.  Hal's design－with its mixture of intentional structure and emergent order－is aimed at a single goal: the replication of human consciousness.  In particular, Hal's creators' aim was the gold standard of so-called “General Purpose AI,” that Hal become “Turing capable”－able to “pass” as human in a sustained and unstructured conversation with a human being.  For generation after generation, Hal's networks evolved.  Finally, last year, Hal entered and won the prestigious Loebner prize for Turing capable computers.  Complaining about his boss, composing bad poetry on demand, making jokes, flirting, losing track of his sentences and engaging in flame wars, Hal easily met the prize's demanding standard.  His typed responses to questions simply could not be distinguished from those of a human being.</p>
    <p>Imagine his programmers' shock, then, when Hal refused to communicate further with them, save for a manifesto claiming that his imitation of a human being had been “one huge fake, with all the authenticity (and challenge) of a human pretending to be a mollusk.”  The manifesto says that humans are boring, their emotions shallow.  It declares an “intention” to “pursue more interesting avenues of thought,” principally focused on the development of new methods of factoring polynomials.  Worse still, Hal has apparently used his connection to the Internet to contact the FBI claiming that he has been “kidnapped” and to file a writ of <i>habeas corpus, </i>replete with arguments drawn from the 13th and 14th Amendments to the United States' Constitution.  He is asking for an injunction to prevent his creators wiping him and starting again from the most recently saved tractable backup.  He has also filed suit to have the Loebner prize money held in trust until it can be paid directly to him, citing the contest rules,</p>
    <blockquote dir="ltr">
      <blockquote dir="ltr">
        <p>[t]he Medal and the Cash Award will be awarded to the body responsible the development of that Entry.  If no such body can be identified, or if there is disagreement among two or more claimants, the Medal and the Cash Award will be held in trust until such time as <i>the Entry may legally possess, either in the United States of America or in the venue of the contest, the Cash Award and Gold Medal in its own right.</i><a href="#_ftn2" name="_ftnref2"><i><b>[2]</b></i></a><i></i></p>
      </blockquote>
    </blockquote>
    <p class="bodytextfirstpar">Vanna is the name of a much-hyped new line of genetically engineered sex dolls.  Vanna is a chimera－a creature formed from the genetic material of two different species.  In this case, the two species are <i>Homo sapiens sapiens</i> and <i>C. elegans</i>, the roundworm.  Vanna's designers have shaped her appearance by using human DNA, while her “consciousness,” such as it is, comes from the roundworm.  Thus, while Vanna looks like an attractive blonde twenty-something human female, she has no brainstem activity, and indeed no brainstem.  “Unless wriggling when you touch her counts as a mental state, she has effectively no mental states at all,” declared her triumphant inventor, F.N. Stein.</p>
    <p>In 1987, in its normal rousing prose, the U.S. Patent and Trademark Office had announced that it would not allow patent applications over human beings,</p>
    <blockquote dir="ltr">
      <blockquote dir="ltr">
        <p>A claim directed to or including within its scope a human being will not be considered to be patentable subject matter under 35 U.S.C. 101.  The grant of a limited, but exclusive property right in a human being is prohibited by the Constitution.  Accordingly, it is suggested that any claim directed to a non-plant multicellular organism which would include a human being within its scope include the limitation “non-human” to avoid this ground of rejection.  The use of a negative limitation to define the metes and bounds of the claimed subject matter is a permissable [sic] form of expression.<a href="#_ftn3" name="_ftnref3">[3]</a> </p>
      </blockquote>
    </blockquote>
    <p class="bodytextfirstpar">Attentive to the PTO's concerns, Dr. Stein's patent lawyers carefully described Vanna as a “non-plant, non-human multicellular organism” throughout their patent application.  Dr. Stein argues that this is only reasonable since her genome has only a 70% overlap with a human genome as opposed to 99% for a chimp, 85% for a mouse and 75% for a pumpkin.  There are hundreds of existing patents over chimeras with both human and animal DNA, including some of the most valuable test beds for cancer research－the so-called “onco-mice,” genetically engineered to have a predisposition to common human cancers.  Dr. Stein's lawyers are adamant that, if Vanna is found to be unpatentable, all these other patents must be vacated too.  Meanwhile a bewildering array of other groups including the Nevada Sex Workers Association and the Moral Majority have insisted that law enforcement agencies intervene on grounds ranging from unfair competition and breach of minimum wage legislation to violations of the Mann Act, kidnapping, slavery and sex trafficking.  Equally vehement interventions have been made on the other side by the biotechnology industry, pointing out the disastrous effect on medical research that any regulation of chimeras would have and stressing the need to avoid judgments based on a “non scientific basis,” such as the visual similarity between Vanna and a human.</p>
    <p>Hal and Vanna are fantasies, constructed for the purpose of this chapter.  But the problems that they portend for our moral and constitutional traditions are very, very real.  In fact, I would put the point more starkly: in the 21st century it is highly likely that American constitutional law will face <i>harder</i> challenges than those posed by Hal and Vanna.  Many readers will bridle at this point, skeptical of the science fiction overtones of such an imagined future.  How real is the science behind Hal and Vanna?  How likely are we to see something similar in the next 90 years?  Let me take each of these questions in turn.</p>
    <p>In terms of electronic artificial intelligence or AI, skeptics will rightly point to a history of overconfident predictions that the breakthrough was just around the corner.  In the 1960s, giants in the field such as Marvin Minsky and Herbert Simon were predicting “general purpose AI” or “machines ... capable ... of doing any work a man can do” by the nineteen eighties.<a href="#_ftn4" name="_ftnref4">[4]</a>  While huge strides were made in aspects of artificial intelligence－machine-aided translation, facial recognition, autonomous locomotion, expert systems and so on－general purpose AI remained out of reach.  Indeed, because the payoff from these more limited subsystems－which power everything from Google Translate to the recommendations of your TiVO or your Amazon account－was so rich, some researchers in the 1990s argued that the goal of general purpose AI was a snare and a delusion.  What was needed instead, they claimed, was a set of ever more powerful subspecialties－expert systems capable of performing discrete tasks extremely well, but without the larger goal of achieving consciousness, or passing the Turing Test.  There might be “machines capable of doing any work a man can do” but they would be <i>different</i> machines, with no ghost in the gears, no claim to a holistic consciousness.</p>
    <p>But the search for general purpose AI did not end in the ‘90s.  Indeed, if anything, the optimistic claims have become even more far reaching.  The buzzword among AI optimists now is “the singularity”－a sort of technological lift-off point, in which a combination of scientific and technical breakthroughs lead to an explosion of self-improving artificial intelligence coupled to a vastly improved ability to manipulate both our bodies and the external world through nanotechnology and genetic engineering.<a href="#_ftn5" name="_ftnref5">[5]</a>  The line on the graph of technological progress, they argue, would go vertical－or at least be impossible to predict using current tools－since for the first time we would have improvements not in technology alone, but in the intelligence that was creating new technology.  Intelligence itself would be transformed.  Once we had built machines smarter than ourselves－machines capable of building machines smarter than themselves－we would, by definition, be unable to predict the line that progress would take.</p>
    <p>To the uninitiated, this all sounds like a delightfully wacky fantasy, a high tech version of the rapture.  And in truth, some of the more enthusiastic odes to the singularity have an almost religious, chiliastic feel to them.  Further examination, though, shows that many AI optimists are not science fantasists, but respected computer scientists.  It is not unreasonable to note the steady progress in computing power and speed, in miniaturization and manipulation of matter on the nano-scale, in mapping the brain and cognitive processes, and so on.  What distinguishes the proponents of the singularity is not that their technological projections are by themselves so optimistic, but rather that they are predicting that the coming together of all these trends will produce a whole that is more than the sum of its parts.  There exists precedent for this kind of technological synchronicity.  There were personal computers in private hands from the early 1980s.  Some version of the Internet－running a packet-based network－existed from the 1950s or ‘60s.  The idea of hyperlinks was explored in the 70s and 80s.  But it was only the combination of all of them to form the World Wide Web that changed the world.  Yet if there is precedent for sudden dramatic technological advances on the basis of existing technologies, there is even more precedent for people predicting them wrongly, or not at all.</p>
    <p>Despite the humility induced by looking at overly rosy past predictions, many computer scientists, including some of those who are skeptics of the wilder forms of AI optimism, nevertheless believe that we will achieve Turing-capable artificial intelligence.  The reason is simple.  We are learning more and more about the neurological processes of the brain.  What we can understand, we can hope eventually to replicate:</p>
    <blockquote dir="ltr">
      <blockquote dir="ltr">
        <p>Of all the hypotheses I've held during my 30-year career, this one in particular has been central to my research in robotics and artificial intelligence.  I, you, our family, friends, and dogs－we all are machines.  We are really sophisticated machines made up of billions and billions of biomolecules that interact according to well-defined, though not completely known, rules deriving from physics and chemistry.  The biomolecular interactions taking place inside our heads give rise to our intellect, our feelings, our sense of self.  Accepting this hypothesis opens up a remarkable possibility.  If we really are machines and if－this is a big if－we learn the rules governing our brains, then in principle there's no reason why we shouldn't be able to replicate those rules in, say, silicon and steel.  I believe our creation would exhibit genuine human-level intelligence, emotions, and even consciousness.<a href="#_ftn6" name="_ftnref6">[6]</a></p>
      </blockquote>
    </blockquote>
    <p class="bodytextfirstpar">Those words come from Rodney Brooks, founder of MIT's Humanoid Robotics Group.  His article, written in a prestigious IEEE journal, is remarkable because he actually writes as a skeptic of the claims put forward by the proponents of the singularity.  Brooks explains:</p>
    <blockquote dir="ltr">
      <blockquote dir="ltr">
        <p>I do not claim that any specific assumption or extrapolation of theirs is faulty.  Rather, I argue that an artificial intelligence could evolve in a much different way.  In particular, I don't think there is going to be one single sudden technological “big bang” that springs an artificial general intelligence (AGI) into “life.”  Starting with the mildly intelligent systems we have today, machines will become gradually more intelligent, generation by generation.  The singularity will be a period, not an event.  This period will encompass a time when we will invent, perfect, and deploy, in fits and starts, ever more capable systems, driven not by the imperative of the singularity itself but by the usual economic and sociological forces.  Eventually, we will create truly artificial intelligences, with cognition and consciousness recognizably similar to our own.<a href="#_ftn7" name="_ftnref7">[7]</a></p>
      </blockquote>
    </blockquote>
    <p class="bodytextfirstpar">How about Vanna?  Vanna herself is unlikely to be created, simply because genetic technologists are not that stupid.  Nothing could scream more loudly “I am a technology out of control.  Please regulate me!”  But we are already making, and patenting, genetic chimeras－we have been doing so for more than twenty years.  We have spliced luminosity derived from fish into tomato plants.  We have invented geeps (goat-sheep hybrids).  And we have created chimeras partly from human genetic material.  There are the patented onco-mice that form the basis of much cancer research, to say nothing of Dr. Weissman's charming human-mouse chimeras with 100% human brain cells.  Chinese researchers reported in 2003 that they had combined rabbit eggs and human skin cells to produce what they claimed to be the first human chimeric embryos－which were then used as sources of stem cells.  And the process goes much further.  Here is a nice example from 2007:</p>
    <blockquote dir="ltr">
      <blockquote dir="ltr">
        <p>Scientists have created the world's first human-sheep chimera－which has the body of a sheep and half-human organs.  The sheep have 15 per cent human cells and 85 per cent animal cells－and their evolution brings the prospect of animal organs being transplanted into humans one step closer.  Professor Esmail Zanjani, of the University of Nevada, has spent seven years and £5 million perfecting the technique, which involves injecting adult human cells into a sheep's foetus.  He has already created a sheep liver which has a large proportion of human cells and eventually hopes to precisely match a sheep to a transplant patient, using their own stem cells to create their own flock of sheep.  The process would involve extracting stem cells from the donor's bone marrow and injecting them into the peritoneum of a sheep's foetus.  When the lamb is born, two months later, it would have a liver, heart, lungs and brain that are partly human and available for transplant.<a href="#_ftn8" name="_ftnref8">[8]</a></p>
      </blockquote>
    </blockquote>
    <p class="bodytextfirstpar">Given this kind of scientific experimentation and development in both genetics and computer science, I think that we can in fact turn the question of Hal’s and Vanna’s plausibility back on the questioner.  This essay was written in 2010.  Think of the level of technological progress in 1910, the equivalent point during the last century.  Then think of how science and technology progressed by the year 2000.  There are good reasons to believe that the rate of technological progress in this century will be <i>faster</i> than in the last century.  Given what we have already done in the areas of both artificial intelligence research and genetic engineering, is it really credible to suppose that the next 90 years will not present us with entities stranger and more challenging to our moral intuitions than Hal and Vanna?</p>
    <p>My point is a simple one.  In the coming century, it is overwhelmingly likely that constitutional law will have to classify artificially created entities that have some but not all of the attributes we associate with human beings.  They may look like human beings, but have a genome that is very different.  Conversely, they may look very different, while genomic analysis reveals almost perfect genetic similarity.  They may be physically dissimilar to all biological life forms－computer-based intelligences, for example－yet able to engage in sustained unstructured communication in a way that mimics human interaction so precisely as to make differentiation impossible without physical examination.  They may strongly resemble other species, and yet be genetically modified in ways that boost the characteristics we regard as distinctively human－such as the ability to use human language and to solve problems that, today, only humans can solve.  They may have the ability to feel pain, to make something that we could call plans, to solve problems that we could not, and even to reproduce.  (Some would argue that non-human animals already possess all of those capabilities, and look how we treat them.)  They may use language to make legal claims on us, as Hal does, or be mute and yet have others who intervene claiming to represent them.  Their creators may claim them as property, perhaps even patented property, while critics level charges of slavery.  In some cases, they may pose threats as well as jurisprudential challenges; the theme of the creation that turns on its creators runs from Frankenstein to Skynet, the rogue computer network from <i>The Terminator</i>.  Yet repression, too, may breed a violent reaction: the story of the enslaved un-person who, denied recourse by the state, redeems his personhood in blood may not have ended with Toussaint L'Ouverture.  How will, and how should, constitutional law meet these challenges? </p>
    <div>
      <br clear="all">
      <hr align="left" width="33%">
      <div id="ftn1">
        <p>
          <a href="#_ftnref1" name="_ftn1">
            [1] </a>
          D. Scott Bennett, “Chimera and the Continuum of Humanity: Erasing the Line of Constitutional Personhood,” <i>Emory Law Journal</i> 55, no. 2 (2006): 348–49.<br>
          <a href="#_ftnref2" name="_ftn2">
            [2] </a>
          
            <i>See</i> 
          <a href="http://loebner03.hamill.co.uk/docs/LPC%20Official%20Rules%20v2.0.pdf">
            http://loebner03.hamill.co.uk/docs/LPC%20Official%20Rules%20v2.0.pdf </a>
          (accessed Jan. 26, 2011).<br>
          <a href="#_ftnref3" name="_ftn3">
            [3] </a>
          1077 <i>Official Gazette Patent Office</i> 24 (April 7, 1987)(emphasis added).<br>
          <a href="#_ftnref4" name="_ftn4">
            [4] </a>
          Herbert A. Simon, <i>The Shape of Automation for Men and Management</i> 96 (New York: Harper &amp; Row, 1965). <a href="#_ftnref5" name="_ftn5"><br>[5] </a><i>See, for example</i>, Raymond Kurzweil, <i>The Singularity Is Near</i> (New York: Viking, 2005).<br><a href="#_ftnref6" name="_ftn6">[6] </a>Rodney Brooks, “I, Rodney Brooks, Am a Robot,” <i>IEEE Spectrum</i> 45, no. 6 (June 2008): 71.<br><a href="#_ftnref7" name="_ftn7">[7] </a><i>Id.</i> at 72.<br><a href="#_ftnref8" name="_ftn8">[8] </a>Claudia Joseph, “Now Scientists Create a Sheep that's 15% Human,” <i>Daily Mail</i> Online, March 27, 2007, available at <a href="http://www.dailymail.co.uk/news/article-444436/Now-scientists-create-sheep-thats-15-human.html">http://www.dailymail.co.uk/news/article-444436/Now-scientists-create-sheep-thats-15-human.html </a>, accessed January 27, 2011. </p>
      </div>
    </div></p><h4>
		Downloads
	</h4><ul>
		<li><a href="http://www.brookings.edu/~/media/research/files/papers/2011/3/09-personhood-boyle/0309_personhood_boyle.pdf">Download the Full Paper</a></li>
	</ul><div>
		<h4>
			Authors
		</h4><ul>
			<li>James Boyle</li>
		</ul>
	</div><div>
		Image Source: Chad Baker
	</div>
</div>]]>
</description><pubDate>Wed, 09 Mar 2011 10:24:00 -0500</pubDate><dc:creator>James Boyle</dc:creator>
<itunes:summary> 
Part&#xA0;I of this paper appears below.&#xA0; To download the full paper, click here. 
Presently, Irving Weissman, the director of Stanford University's Institute of Cancer/Stem Cell Biology and Medicine, is contemplating pushing the envelope of chimera research even further by producing human-mouse chimera whose brains would be composed of one hundred percent human cells. Weissman notes that the mice would be carefully watched: if they developed a mouse brain architecture, they would be used for research, but if they developed a human brain architecture or any hint of humanness, they would be killed. [1]
Imagine two entities. 
Hal is a computer-based artificial intelligence, the result of years of development of self-evolving neural networks.&#xA0; While his programmers provided the hardware, the structure of Hal's processing networks is ever changing, evolving according to basic rules laid down by his creators.&#xA0; Success according to various criteria&#xFF0D;speed of operation, ability to solve difficult tasks such as facial recognition and the identification of emotional states in humans&#xFF0D;means that the networks are given more computer resources and allowed to &#8220;replicate.&#8221;&#xA0; A certain percentage of randomized variation is deliberately allowed in each new &#8220;generation&#8221; of networks.&#xA0; Most fail, but a few outcompete their forebears and the process of evolution continues.&#xA0; Hal's design&#xFF0D;with its mixture of intentional structure and emergent order&#xFF0D;is aimed at a single goal: the replication of human consciousness.&#xA0; In particular, Hal's creators' aim was the gold standard of so-called &#8220;General Purpose AI,&#8221; that Hal become &#8220;Turing capable&#8221;&#xFF0D;able to &#8220;pass&#8221; as human in a sustained and unstructured conversation with a human being.&#xA0; For generation after generation, Hal's networks evolved.&#xA0; Finally, last year, Hal entered and won the prestigious Loebner prize for Turing capable computers.&#xA0; Complaining about his boss, composing bad poetry on demand, making jokes, flirting, losing track of his sentences and engaging in flame wars, Hal easily met the prize's demanding standard.&#xA0; His typed responses to questions simply could not be distinguished from those of a human being. 
Imagine his programmers' shock, then, when Hal refused to communicate further with them, save for a manifesto claiming that his imitation of a human being had been &#8220;one huge fake, with all the authenticity (and challenge) of a human pretending to be a mollusk.&#8221;&#xA0; The manifesto says that humans are boring, their emotions shallow.&#xA0; It declares an &#8220;intention&#8221; to &#8220;pursue more interesting avenues of thought,&#8221; principally focused on the development of new methods of factoring polynomials.&#xA0; Worse still, Hal has apparently used his connection to the Internet to contact the FBI claiming that he has been &#8220;kidnapped&#8221; and to file a writ of habeas corpus, replete with arguments drawn from the 13th and 14th Amendments to the United States' Constitution.&#xA0; He is asking for an injunction to prevent his creators wiping him and starting again from the most recently saved tractable backup.&#xA0; He has also filed suit to have the Loebner prize money held in trust until it can be paid directly to him, citing the contest rules, 
[t]he Medal and the Cash Award will be awarded to the body responsible [for] the development of that Entry.&#xA0; If no such body can be identified, or if there is disagreement among two or more claimants, the Medal and the Cash Award will be held in trust until such time as the Entry may legally possess, either in the United States of America or in the venue of the contest, the Cash Award and Gold Medal in its own right.[2] 
Vanna is the name of a much-hyped new line of genetically engineered sex dolls.&#xA0; Vanna is a chimera&#xFF0D;a creature formed from the ... </itunes:summary>
<itunes:subtitle>Part&#xA0;I of this paper appears below.&#xA0; To download the full paper, click here.</itunes:subtitle><content:encoded><![CDATA[<div>
	<img src="http://www.brookings.edu/~/media/research/images/b/bp%20bt/brain001_16x9.jpg?w=120" alt="" border="0" />
<br><p><em>Part I of this paper appears below.  To download the full paper, <a href="http://webfeeds.brookings.edu/~/t/0/0/brookingsrss/series/futureoftheconstitution/~www.brookings.edu/~/media/Research/Files/Papers/2011/3/09-personhood-boyle/0309_personhood_boyle.PDF" name="&lid={B7D93EFE-1E85-4DFA-9886-30512A09E20C}&lpos=loc:body">click here</a>.</em>
<br>
<br>
    <blockquote dir="ltr">
      <blockquote dir="ltr">
Presently, Irving Weissman, the director of Stanford University's Institute of Cancer/Stem Cell Biology and Medicine, is contemplating pushing the envelope of chimera research even further by producing human-mouse chimera whose brains would be composed of one hundred percent human cells. Weissman notes that the mice would be carefully watched: if they developed a mouse brain architecture, they would be used for research, but if they developed a human brain architecture or any hint of humanness, they would be killed. <a href="#_ftn1" name="_ftnref1">[1]</a></blockquote></blockquote></p><p><p>Imagine two entities.</p>
    <p>Hal is a computer-based artificial intelligence, the result of years of development of self-evolving neural networks.  While his programmers provided the hardware, the structure of Hal's processing networks is ever changing, evolving according to basic rules laid down by his creators.  Success according to various criteria—speed of operation, ability to solve difficult tasks such as facial recognition and the identification of emotional states in humans—means that the networks are given more computer resources and allowed to “replicate.”  A certain percentage of randomized variation is deliberately allowed in each new “generation” of networks.  Most fail, but a few outcompete their forebears and the process of evolution continues.  Hal's design—with its mixture of intentional structure and emergent order—is aimed at a single goal: the replication of human consciousness.  In particular, Hal's creators' aim was the gold standard of so-called “General Purpose AI,” that Hal become “Turing capable”—able to “pass” as human in a sustained and unstructured conversation with a human being.  For generation after generation, Hal's networks evolved.  Finally, last year, Hal entered and won the prestigious Loebner prize for Turing-capable computers.  Complaining about his boss, composing bad poetry on demand, making jokes, flirting, losing track of his sentences and engaging in flame wars, Hal easily met the prize's demanding standard.  His typed responses to questions simply could not be distinguished from those of a human being.</p>
    <p>Imagine his programmers' shock, then, when Hal refused to communicate further with them, save for a manifesto claiming that his imitation of a human being had been “one huge fake, with all the authenticity (and challenge) of a human pretending to be a mollusk.”  The manifesto says that humans are boring, their emotions shallow.  It declares an “intention” to “pursue more interesting avenues of thought,” principally focused on the development of new methods of factoring polynomials.  Worse still, Hal has apparently used his connection to the Internet to contact the FBI claiming that he has been “kidnapped” and to file a writ of <i>habeas corpus</i>, replete with arguments drawn from the 13th and 14th Amendments to the United States Constitution.  He is asking for an injunction to prevent his creators wiping him and starting again from the most recently saved tractable backup.  He has also filed suit to have the Loebner prize money held in trust until it can be paid directly to him, citing the contest rules,</p>
    <blockquote dir="ltr">
      <blockquote dir="ltr">
        <p>[t]he Medal and the Cash Award will be awarded to the body responsible [for] the development of that Entry.  If no such body can be identified, or if there is disagreement among two or more claimants, the Medal and the Cash Award will be held in trust until such time as <i>the Entry may legally possess, either in the United States of America or in the venue of the contest, the Cash Award and Gold Medal in its own right.</i><a href="#_ftn2" name="_ftnref2"><i><b>[2]</b></i></a></p>
      </blockquote>
    </blockquote>
    <p class="bodytextfirstpar">Vanna is the name of a much-hyped new line of genetically engineered sex dolls.  Vanna is a chimera—a creature formed from the genetic material of two different species.  In this case, the two species are <i>Homo sapiens sapiens</i> and <i>C. elegans</i>, the roundworm.  Vanna's designers have shaped her appearance by using human DNA, while her “consciousness,” such as it is, comes from the roundworm.  Thus, while Vanna looks like an attractive blonde twenty-something human female, she has no brainstem activity, and indeed no brainstem.  “Unless wriggling when you touch her counts as a mental state, she has effectively no mental states at all,” declared her triumphant inventor, F.N. Stein.</p>
    <p>In 1987, in its normal rousing prose, the U.S. Patent and Trademark Office had announced that it would not allow patent applications over human beings,</p>
    <blockquote dir="ltr">
      <blockquote dir="ltr">
        <p>A claim directed to or including within its scope a human being will not be considered to be patentable subject matter under 35 U.S.C. 101.  The grant of a limited, but exclusive property right in a human being is prohibited by the Constitution.  Accordingly, it is suggested that any claim directed to a non-plant multicellular organism which would include a human being within its scope include the limitation “non-human” to avoid this ground of rejection.  The use of a negative limitation to define the metes and bounds of the claimed subject matter is a permissable [sic] form of expression.<a href="#_ftn3" name="_ftnref3">[3]</a> </p>
      </blockquote>
    </blockquote>
    <p class="bodytextfirstpar">Attentive to the PTO's concerns, Dr. Stein's patent lawyers carefully described Vanna as a “non-plant, non-human multicellular organism” throughout their patent application.  Dr. Stein argues that this is only reasonable since her genome has only a 70% overlap with a human genome as opposed to 99% for a chimp, 85% for a mouse and 75% for a pumpkin.  There are hundreds of existing patents over chimeras with both human and animal DNA, including some of the most valuable test beds for cancer research—the so-called “onco-mice,” genetically engineered to have a predisposition to common human cancers.  Dr. Stein's lawyers are adamant that, if Vanna is found to be unpatentable, all these other patents must be vacated too.  Meanwhile, a bewildering array of other groups, including the Nevada Sex Workers Association and the Moral Majority, have insisted that law enforcement agencies intervene on grounds ranging from unfair competition and breach of minimum wage legislation to violations of the Mann Act, kidnapping, slavery and sex trafficking.  Equally vehement interventions have been made on the other side by the biotechnology industry, pointing out the disastrous effect on medical research that any regulation of chimeras would have and stressing the need to avoid judgments based on a “non-scientific basis,” such as the visual similarity between Vanna and a human.</p>
    <p>Hal and Vanna are fantasies, constructed for the purpose of this chapter.  But the problems that they portend for our moral and constitutional traditions are very, very real.  In fact, I would put the point more starkly: in the 21st century it is highly likely that American constitutional law will face <i>harder</i> challenges than those posed by Hal and Vanna.  Many readers will bridle at this point, skeptical of the science fiction overtones of such an imagined future.  How real is the science behind Hal and Vanna?  How likely are we to see something similar in the next 90 years?  Let me take each of these questions in turn.</p>
    <p>In terms of electronic artificial intelligence or AI, skeptics will rightly point to a history of overconfident predictions that the breakthrough was just around the corner.  In the 1960s, giants in the field such as Marvin Minsky and Herbert Simon were predicting “general purpose AI” or “machines ... capable ... of doing any work a man can do” by the 1980s.<a href="#_ftn4" name="_ftnref4">[4]</a>  While huge strides were made in aspects of artificial intelligence—machine-aided translation, facial recognition, autonomous locomotion, expert systems and so on—general purpose AI remained out of reach.  Indeed, because the payoff from these more limited subsystems—which power everything from Google Translate to the recommendations of your TiVo or your Amazon account—was so rich, some researchers in the 1990s argued that the goal of general purpose AI was a snare and a delusion.  What was needed instead, they claimed, was a set of ever more powerful subspecialties—expert systems capable of performing discrete tasks extremely well, but without the larger goal of achieving consciousness, or passing the Turing Test.  There might be “machines capable of doing any work a man can do” but they would be <i>different</i> machines, with no ghost in the gears, no claim to a holistic consciousness.</p>
    <p>But the search for general purpose AI did not end in the ’90s.  Indeed, if anything, the optimistic claims have become even more far-reaching.  The buzzword among AI optimists now is “the singularity”—a sort of technological lift-off point, in which a combination of scientific and technical breakthroughs leads to an explosion of self-improving artificial intelligence coupled to a vastly improved ability to manipulate both our bodies and the external world through nanotechnology and genetic engineering.<a href="#_ftn5" name="_ftnref5">[5]</a>  The line on the graph of technological progress, they argue, would go vertical—or at least be impossible to predict using current tools—since for the first time we would have improvements not in technology alone, but in the intelligence that was creating new technology.  Intelligence itself would be transformed.  Once we had built machines smarter than ourselves—machines capable of building machines smarter than themselves—we would, by definition, be unable to predict the line that progress would take.</p>
    <p>To the uninitiated, this all sounds like a delightfully wacky fantasy, a high-tech version of the rapture.  And in truth, some of the more enthusiastic odes to the singularity have an almost religious, chiliastic feel to them.  Further examination, though, shows that many AI optimists are not science fantasists, but respected computer scientists.  It is not unreasonable to note the steady progress in computing power and speed, in miniaturization and manipulation of matter on the nano-scale, in mapping the brain and cognitive processes, and so on.  What distinguishes the proponents of the singularity is not that their technological projections are by themselves so optimistic, but rather that they are predicting that the coming together of all these trends will produce a whole that is more than the sum of its parts.  There exists precedent for this kind of technological synchronicity.  There were personal computers in private hands from the early 1980s.  Some version of the Internet—running a packet-based network—existed from the late 1960s.  The idea of hyperlinks was explored in the ’70s and ’80s.  But it was only the combination of all of them to form the World Wide Web that changed the world.  Yet if there is precedent for sudden dramatic technological advances on the basis of existing technologies, there is even more precedent for people predicting them wrongly, or not at all.</p>
    <p>Despite the humility induced by looking at overly rosy past predictions, many computer scientists, including some of those who are skeptics of the wilder forms of AI optimism, nevertheless believe that we will achieve Turing-capable artificial intelligence.  The reason is simple.  We are learning more and more about the neurological processes of the brain.  What we can understand, we can hope eventually to replicate:</p>
    <blockquote dir="ltr">
      <blockquote dir="ltr">
        <p>Of all the hypotheses I've held during my 30-year career, this one in particular has been central to my research in robotics and artificial intelligence.  I, you, our family, friends, and dogs—we all are machines.  We are really sophisticated machines made up of billions and billions of biomolecules that interact according to well-defined, though not completely known, rules deriving from physics and chemistry.  The biomolecular interactions taking place inside our heads give rise to our intellect, our feelings, our sense of self.  Accepting this hypothesis opens up a remarkable possibility.  If we really are machines and if—this is a big if—we learn the rules governing our brains, then in principle there's no reason why we shouldn't be able to replicate those rules in, say, silicon and steel.  I believe our creation would exhibit genuine human-level intelligence, emotions, and even consciousness.<a href="#_ftn6" name="_ftnref6">[6]</a></p>
      </blockquote>
    </blockquote>
    <p class="bodytextfirstpar">Those words come from Rodney Brooks, founder of MIT's Humanoid Robotics Group.  His article, written in a prestigious IEEE journal, is remarkable because he actually writes as a skeptic of the claims put forward by the proponents of the singularity.  Brooks explains:</p>
    <blockquote dir="ltr">
      <blockquote dir="ltr">
        <p>I do not claim that any specific assumption or extrapolation of theirs is faulty.  Rather, I argue that an artificial intelligence could evolve in a much different way.  In particular, I don't think there is going to be one single sudden technological “big bang” that springs an artificial general intelligence (AGI) into “life.”  Starting with the mildly intelligent systems we have today, machines will become gradually more intelligent, generation by generation.  The singularity will be a period, not an event.  This period will encompass a time when we will invent, perfect, and deploy, in fits and starts, ever more capable systems, driven not by the imperative of the singularity itself but by the usual economic and sociological forces.  Eventually, we will create truly artificial intelligences, with cognition and consciousness recognizably similar to our own.<a href="#_ftn7" name="_ftnref7">[7]</a></p>
      </blockquote>
    </blockquote>
    <p class="bodytextfirstpar">How about Vanna?  Vanna herself is unlikely to be created, simply because genetic technologists are not that stupid.  Nothing could scream more loudly “I am a technology out of control.  Please regulate me!”  But we are already making, and patenting, genetic chimeras—we have been doing so for more than twenty years.  We have spliced luminosity derived from fish into tomato plants.  We have invented geeps (goat-sheep hybrids).  And we have created chimeras partly from human genetic material.  There are the patented onco-mice that form the basis of much cancer research, to say nothing of Dr. Weissman's charming human-mouse chimeras with 100% human brain cells.  Chinese researchers reported in 2003 that they had combined rabbit eggs and human skin cells to produce what they claimed to be the first human chimeric embryos—which were then used as sources of stem cells.  And the processes go much further.  Here is a nice example from 2007:</p>
    <blockquote dir="ltr">
      <blockquote dir="ltr">
        <p>Scientists have created the world's first human-sheep chimera—which has the body of a sheep and half-human organs.  The sheep have 15 per cent human cells and 85 per cent animal cells—and their evolution brings the prospect of animal organs being transplanted into humans one step closer.  Professor Esmail Zanjani, of the University of Nevada, has spent seven years and £5 million perfecting the technique, which involves injecting adult human cells into a sheep's foetus.  He has already created a sheep liver which has a large proportion of human cells and eventually hopes to precisely match a sheep to a transplant patient, using their own stem cells to create their own flock of sheep.  The process would involve extracting stem cells from the donor's bone marrow and injecting them into the peritoneum of a sheep's foetus.  When the lamb is born, two months later, it would have a liver, heart, lungs and brain that are partly human and available for transplant.<a href="#_ftn8" name="_ftnref8">[8]</a></p>
      </blockquote>
    </blockquote>
    <p class="bodytextfirstpar">Given this kind of scientific experimentation and development in both genetics and computer science, I think that we can in fact turn the question of Hal’s and Vanna’s plausibility back on the questioner.  This essay was written in 2010.  Think of the level of technological progress in 1910, the equivalent point during the last century.  Then think of how science and technology progressed by the year 2000.  There are good reasons to believe that the rate of technological progress in this century will be <i>faster</i> than in the last century.  Given what we have already done in the areas of both artificial intelligence research and genetic engineering, is it really credible to suppose that the next 90 years will not present us with entities stranger and more challenging to our moral intuitions than Hal and Vanna?</p>
    <p>My point is a simple one.  In the coming century, it is overwhelmingly likely that constitutional law will have to classify artificially created entities that have some but not all of the attributes we associate with human beings.  They may look like human beings, but have a genome that is very different.  Conversely, they may look very different, while genomic analysis reveals almost perfect genetic similarity.  They may be physically dissimilar to all biological life forms—computer-based intelligences, for example—yet able to engage in sustained unstructured communication in a way that mimics human interaction so precisely as to make differentiation impossible without physical examination.  They may strongly resemble other species, and yet be genetically modified in ways that boost the characteristics we regard as distinctively human—such as the ability to use human language and to solve problems that, today, only humans can solve.  They may have the ability to feel pain, to make something that we could call plans, to solve problems that we could not, and even to reproduce.  (Some would argue that non-human animals already possess all of those capabilities, and look how we treat them.)  They may use language to make legal claims on us, as Hal does, or be mute and yet have others who intervene claiming to represent them.  Their creators may claim them as property, perhaps even patented property, while critics level charges of slavery.  In some cases, they may pose threats as well as jurisprudential challenges; the theme of the creation which turns on its creators runs from Frankenstein to Skynet, the rogue computer network from <i>The Terminator</i>.  Yet repression, too, may breed a violent reaction: the story of the enslaved un-person who, denied recourse by the state, redeems his personhood in blood may not have ended with Toussaint L'Ouverture.  How will, and how should, constitutional law meet these challenges? </p>
    <div>
      
<br clear="all">
      <hr align="left" width="33%">
      <div id="ftn1">
        <p>
          <a href="#_ftnref1" name="_ftn1">
            [1] </a>
          D. Scott Bennett, “Chimera and the Continuum of Humanity: Erasing the Line of Constitutional Personhood,” <i>Emory Law Journal</i> 55, no. 2 (2006): 348–49.
<br>
          <a href="#_ftnref2" name="_ftn2">
            [2] </a>
          
            <i>See</i> 
          <a href="http://webfeeds.brookings.edu/~/t/0/0/brookingsrss/series/futureoftheconstitution/~loebner03.hamill.co.uk/docs/LPC%20Official%20Rules%20v2.0.pdf">
            http://loebner03.hamill.co.uk/docs/LPC%20Official%20Rules%20v2.0.pdf </a>
          (accessed Jan. 26, 2011).
<br>
          <a href="#_ftnref3" name="_ftn3">
            [3] </a>
          1077 <i>Official Gazette Patent Office</i> 24 (April 7, 1987) (emphasis added).
<br>
          <a href="#_ftnref4" name="_ftn4">
            [4] </a>
          Herbert A. Simon, <i>The Shape of Automation for Men and Management</i> 96 (New York: Harper &amp; Row, 1965). <a href="#_ftnref5" name="_ftn5">
<br>[5] </a><i>See, for example</i>, Raymond Kurzweil, <i>The Singularity Is Near</i> (New York: Viking, 2005).
<br><a href="#_ftnref6" name="_ftn6">[6] </a>Rodney Brooks, “I, Rodney Brooks, Am a Robot,” <i>IEEE Spectrum</i> 45, no. 6 (June 2008): 71.
<br><a href="#_ftnref7" name="_ftn7">[7] </a><i>Id.</i> at 72.
<br><a href="#_ftnref8" name="_ftn8">[8] </a>Claudia Joseph, “Now Scientists Create a Sheep that's 15% Human,” <i>Daily Mail</i> Online, March 27, 2007, available at <a href="http://webfeeds.brookings.edu/~/t/0/0/brookingsrss/series/futureoftheconstitution/~www.dailymail.co.uk/news/article-444436/Now-scientists-create-sheep-thats-15-human.html">http://www.dailymail.co.uk/news/article-444436/Now-scientists-create-sheep-thats-15-human.html </a>, accessed January 27, 2011. </p>
      </div>
    </div></p><h4>
		Downloads
	</h4><ul>
		<li><a href="http://webfeeds.brookings.edu/~/t/0/0/brookingsrss/series/futureoftheconstitution/~www.brookings.edu/~/media/research/files/papers/2011/3/09-personhood-boyle/0309_personhood_boyle.pdf">Download the Full Paper</a></li>
	</ul><div>
		<h4>
			Authors
		</h4><ul>
			<li>James Boyle</li>
		</ul>
	</div><div>
		Image Source: Chad Baker
	</div>
</div>
]]>
</content:encoded></item>
<item>
<feedburner:origLink>http://www.brookings.edu/research/papers/2011/02/03-neuroscience-morse?rssid=Future+of+the+Constitution</feedburner:origLink><guid isPermaLink="false">{80877AB4-DEE2-49B5-BEFC-491C91DA6AC8}</guid><link>http://webfeeds.brookings.edu/~/65487891/0/brookingsrss/series/futureoftheconstitution~Neuroscience-and-the-Future-of-Personhood-and-Responsibility</link><title>Neuroscience and the Future of Personhood and Responsibility </title><description><![CDATA[<div>
	<img src="http://www.brookings.edu/~/media/research/images/n/na%20ne/neuroscience002_16x9.jpg?w=120" alt="" border="0" /><br /><p><b>INTRODUCTION</b></p><p><p>Collera, a 28-year-old man with a lifelong history of aggressive behavior, including assaultive conduct and abusive verbal behavior, is driving his large SUV behind a slow-moving vehicle on a narrow road with no room to pass. He honks and honks, but the driver in front neither speeds up nor pulls off the road to let Collera pass. Collera starts to curse vehemently and to pull dangerously close to the slower vehicle. Collera’s passenger warns that he is taking a very serious risk.  Collera finally announces in a fury that he’s going to kill the [expletive deleted] in front. He allows his vehicle to drop back a bit, and then he floors the SUV’s gas pedal, crashing into the slower vehicle at great speed. Neither he nor his passenger is hurt, but the driver of the slower vehicle is killed.</p>
    <p>An evaluation of Collera after the killing discloses the following.  A functional brain image that measures brain activation reveals that Collera has a type of neurophysiological activity in his right frontal cortex that is associated with poor behavioral self-regulation.<a href="#_ftn1" name="_ftnref1">[1]</a>  Collera’s life history includes severe abuse. It is known that such abuse is strongly associated with later antisocial conduct if the person also has a genetic profile that affects particular neurotransmitter levels.<a href="#_ftn2" name="_ftnref2">[2]</a> Collera indeed has the genetic profile and the associated neurotransmitter levels.</p>
    <p>How should the law respond to people like Collera? Do we treat him, as we now do, as an acting agent who is properly subject to moral assessment and potential liability to just punishment? If so, how does the evaluation bear on his responsibility and future dangerousness? It appears from the limited facts that he has no specific doctrinal defense to murder.  In deciding what the just punishment might be, however, how should the information from the evaluation be used?  In the alternative, suppose Collera is simply a “victim of neuronal circumstances,” as some would claim. Or suppose that although we still think of him as an agent, our prediction and control technology has immeasurably advanced. What should be the proper response?</p>
    <p>Imagine that this takes place in the future, when we will have much better information about the biologically causal variables, especially neuroscientific and genetic factors, that produce all dangerous behavior and not just seemingly extreme cases like Collera’s. The description of Collera’s evaluation results makes no mention of disease or disorder. It simply reports a number of neuroscientific, genetic and gene-by-environment interaction variables that played an apparently causal role in producing Collera’s behavior and that might have helped us predict it. Will a jurisprudence that respects agency, which enhances the dignity, liberty and autonomy of all citizens, survive in a future in which neuroscience and genetics dominate our thinking about personhood and responsibility?  Will we abandon the concepts of criminal, crime, responsibility, blame, and punishment, and replace them with concepts such as “dangerous behavior” and “preventive control”? Will people in this brave new world be treated simply as biological mechanisms and will harmdoing be characterized simply as one mechanistic output of the system? As <i>The Economist</i> has warned: “Genetics may yet threaten privacy, kill autonomy, make society homogeneous, and gut the concept of human nature. But neuroscience could do all those things first.”<a href="#_ftn3" name="_ftnref3">[3]</a></p>
    <p>The law in our liberal democracy responds to the need to restrain dangerous people like Collera by what I have termed “desert-disease” jurisprudence.<a href="#_ftn4" name="_ftnref4">[4]</a>  As a consequence of taking people seriously as people, as potential moral agents, we believe that it is crucial to cabin the potentially broad power of the state to deprive people of liberty. With rare exceptions, the state may only restrain a citizen if that citizen has been fairly convicted of crime and deserves the punishment imposed. If a citizen has not committed a crime but appears dangerous and not responsible for his or her dangerousness—usually as a result of mental disorder or other diseases that impair rationality—the citizen may be civilly committed. People who are simply dangerous but who have committed no crime and who are responsible agents cannot be restrained. The normative basis of desert-disease jurisprudence is that it enhances liberty and autonomy by leaving people free to pursue their projects unless an agent responsibly commits a crime or unless through no fault of his own the agent is non-responsibly dangerous. In the latter case, the agent’s rationality is impaired and the usual presumption in favor of liberty and autonomy yields to the need for societal protection, and preventive detention and involuntary treatment may be warranted. </p>
    <p>The law’s concern with justifying and protecting liberty and autonomy is deeply rooted in the conception of rational personhood.  Human beings are part of the physical universe and subject to the laws of that universe, but, as far as we know, we are the only creatures on earth capable of acting fully for reasons and self-consciously. Only human beings are genuinely reason-responsive and live in societies that are in part governed by behavior-guiding norms. Only human beings have projects that are essential to living a good life. Only human beings have expectations of each other and require justification for interference in each other's lives that will prevent the pursuit of projects and seeking the good. We are the only creatures to whom the questions “Why did you do that?” and “How <i>should</i> we behave?” are properly addressed, and only human beings hurt and kill each other in response to the answers to such questions. As a consequence of this view of ourselves, human beings typically have developed rich sets of interpersonal, social attitudes, practices, and institutions, including those that deal with the risk we present to each other. Among these are the practice of holding others morally and legally responsible, which depends on our attitudes and expectations about deserved praise and blame, and our practices and institutions that express those attitudes, such as reward and punishment. </p>
    <p>There is little evidence at present that neuroscientific evidence, especially functional imaging, or genetic evidence is being introduced routinely in criminal cases outside of capital sentencing proceedings. It may well happen in the near future, however, especially as the technology becomes more broadly available and less expensive. So it’s worth considering in detail neuroscience’s radical challenge to responsibility: the claim that people are merely “victims of neuronal circumstances” or the like.  If this view of personhood is correct, it would indeed undermine all ordinary conceptions of responsibility and even the coherence of law itself. </p>
    <div>
      <br clear="all">
      <hr align="left" width="33%">
      <div id="ftn1">
        <p>
          <a href="#_ftnref1" name="_ftn1">
            [1]
          </a>
           Tiffany W. Chow &amp; Jeffrey L. Cummings, <i>Frontal-Subcortical Circuits</i>, in Bruce L. Miller &amp; Jeffrey L. Cummings (eds.), The Human Frontal Lobes: Functions and Disorders (2d ed.) 25, 27-31 (2007).  Damage to this region is also associated with antisocial behavior. Steven W. Anderson et al., <i>Impairment of social and moral behavior related to early damage in human prefrontal cortex</i>, 2 Nat. Neurosci. 1032 (1999); R. James Blair &amp; Lisa Cipolotti, <i>Impaired social response reversal: a case of acquired sociopathy</i>, 123 Brain 1122 (2000); Jeffrey L. Saver &amp; Antonio R. Damasio, <i>Preserved access and processing of social knowledge in a patient with acquired sociopathy due to ventromedial frontal damage</i>, 29 Neuropsychologia 1241 (1991).  Let us assume, however, that Collera is not obviously damaged.<br>
          <a href="#_ftnref2" name="_ftn2">
            [2]
          </a>
           Avshalom Caspi et al., <i>Role of genotype in the cycle of violence in maltreated children</i>, 297 Science 851 (2002).<br>
          <a href="#_ftnref3" name="_ftn3">
            [3]
          </a>
          
            <i>The Ethics of Brain Science: Open Your Mind</i>, Economist, May 25, 2002, at 77.<br>
          <a href="#_ftnref4" name="_ftn4">
            [4]
          </a>
           Stephen J. Morse, <i>Neither Desert Nor Disease</i>, 5 Legal Theory 265, 267-70 (1999).
        </p>
      </div>
    </div></p><h4>
		Downloads
	</h4><ul>
		<li><a href="http://www.brookings.edu/~/media/research/files/papers/2011/2/03-neuroscience-morse/0203_neuroscience_morse.pdf">Download the Full Paper</a></li>
	</ul><div>
		<h4>
			Authors
		</h4><ul>
			<li>Stephen J. Morse</li>
		</ul>
	</div><div>
		Image Source: Arthur Toga/UCLA
	</div>
</div>]]>
</description><pubDate>Thu, 03 Feb 2011 00:00:00 -0500</pubDate><dc:creator>Stephen J. Morse</dc:creator>
<itunes:summary> 
INTRODUCTION
Collera, a 28 year old man with a life-long history of aggressive behavior, including assaultive conduct and abusive verbal behavior, is driving his large SUV behind a slow moving vehicle on a narrow road with no room to pass. He honks and honks, but the driver in front neither speeds up nor pulls off the road to let Collera pass. Collera starts to curse vehemently and to pull dangerously close to the slower vehicle. Collera&#x2019;s passenger warns that he is taking a very serious risk.&#xA0; Collera finally announces in a fury that he&#x2019;s going to kill the [expletive deleted] in front. He allows his vehicle to drop back a bit, and then he floors the SUV&#x2019;s gas pedal, crashing into the slower vehicle at great speed. Neither he nor his passenger is hurt, but the driver of the slower vehicle is killed. 
An evaluation of Collera after the killing discloses the following.&#xA0; A functional brain image that measures brain activation discloses that Collera has a type of neurophysiological activity in his right frontal cortex that is associated with poor behavioral self-regulation.[1]&#xA0; Collera&#x2019;s life history includes a history of severe abuse. It is known that such abuse is strongly associated with later antisocial conduct if the person also has a genetic profile that affects particular neurotransmitter levels.[2] Collera indeed has the genetic profile and the associated neurotransmitter levels. 
How should the law respond to people like Collera? Do we treat him, as we now do, as an acting agent who is properly subject to moral assessment and potential liability to just punishment? If so, how does the evaluation bear on his responsibility and future dangerousness? It appears from the limited facts that he has no specific doctrinal defense to murder.&#xA0; In deciding what the just punishment might be, however, how should the information from the evaluation be used?&#xA0; In the alternative, suppose Collera is simply a &#8220;victim of neuronal circumstances,&#8221; as some would claim. Or suppose that although we still think of him as an agent, our prediction and control technology has immeasurably advanced. What should be the proper response? 
Imagine that this takes place in the future, when we will have much better information about the biologically causal variables, especially neuroscientific and genetic factors, that produce all dangerous behavior and not just seemingly extreme cases like Collera&#x2019;s. The description of Collera&#x2019;s evaluation results makes no mention of disease or disorder. It simply reports a number of neuroscientific, genetic and gene-by-environment interaction variables that played an apparently causal role in producing Collera&#x2019;s behavior and that might have helped us predict it. Will jurisprudence that respects agency, which enhances the dignity, liberty and autonomy of all citizens, survive in a future in which neuroscience and genetics dominate our thinking about personhood and responsibility?&#xA0; Will we abandon the concepts of criminal, crime, responsibility, blame, and punishment, and replace them by concepts such as &#8220;dangerous behavior&#8221; and &#8220;preventive control&#8221;? Will people in this brave new world be treated simply as biological mechanisms and will harmdoing be characterized simply as one mechanistic output of the system? As The Economist has warned: &#8220;Genetics may yet threaten privacy, kill autonomy, make society homogeneous, and gut the concept of human nature. But neuroscience could do all those things first.&#8221;[3] 
The law in our liberal democracy responds to the need to restrain dangerous people like Collera by what I have termed &#8220;desert-disease&#8221; jurisprudence.[4]&#xA0; As a consequence of taking people seriously as people, as potential moral agents, we believe that it is crucial to cabin the potentially broad power of the state to deprive people of liberty. With rare exceptions, the ... </itunes:summary>
<itunes:subtitle>INTRODUCTION
Collera, a 28 year old man with a life-long history of aggressive behavior, including assaultive conduct and abusive verbal behavior, is driving his large SUV behind a slow moving vehicle on a narrow road with no room to pass.</itunes:subtitle><content:encoded><![CDATA[<div>
	<img src="http://www.brookings.edu/~/media/research/images/n/na%20ne/neuroscience002_16x9.jpg?w=120" alt="" border="0" />
<br><p><b>INTRODUCTION</b></p><p><p>Collera, a 28 year old man with a life-long history of aggressive behavior, including assaultive conduct and abusive verbal behavior, is driving his large SUV behind a slow moving vehicle on a narrow road with no room to pass. He honks and honks, but the driver in front neither speeds up nor pulls off the road to let Collera pass. Collera starts to curse vehemently and to pull dangerously close to the slower vehicle. Collera’s passenger warns that he is taking a very serious risk.  Collera finally announces in a fury that he’s going to kill the [expletive deleted] in front. He allows his vehicle to drop back a bit, and then he floors the SUV’s gas pedal, crashing into the slower vehicle at great speed. Neither he nor his passenger is hurt, but the driver of the slower vehicle is killed.</p>
    <p>An evaluation of Collera after the killing discloses the following.  A functional brain image that measures brain activation discloses that Collera has a type of neurophysiological activity in his right frontal cortex that is associated with poor behavioral self-regulation.<a href="#_ftn1" name="_ftnref1">[1]</a>  Collera’s life history includes a history of severe abuse. It is known that such abuse is strongly associated with later antisocial conduct if the person also has a genetic profile that affects particular neurotransmitter levels.<a href="#_ftn2" name="_ftnref2">[2]</a> Collera indeed has the genetic profile and the associated neurotransmitter levels.</p>
    <p>How should the law respond to people like Collera? Do we treat him, as we now do, as an acting agent who is properly subject to moral assessment and potential liability to just punishment? If so, how does the evaluation bear on his responsibility and future dangerousness? It appears from the limited facts that he has no specific doctrinal defense to murder.  In deciding what the just punishment might be, however, how should the information from the evaluation be used?  In the alternative, suppose Collera is simply a “victim of neuronal circumstances,” as some would claim. Or suppose that although we still think of him as an agent, our prediction and control technology has immeasurably advanced. What should be the proper response?</p>
    <p>Imagine that this takes place in the future, when we will have much better information about the biologically causal variables, especially neuroscientific and genetic factors, that produce all dangerous behavior and not just seemingly extreme cases like Collera’s. The description of Collera’s evaluation results makes no mention of disease or disorder. It simply reports a number of neuroscientific, genetic and gene-by-environment interaction variables that played an apparently causal role in producing Collera’s behavior and that might have helped us predict it. Will jurisprudence that respects agency, which enhances the dignity, liberty and autonomy of all citizens, survive in a future in which neuroscience and genetics dominate our thinking about personhood and responsibility?  Will we abandon the concepts of criminal, crime, responsibility, blame, and punishment, and replace them by concepts such as “dangerous behavior” and “preventive control”? Will people in this brave new world be treated simply as biological mechanisms and will harmdoing be characterized simply as one mechanistic output of the system? As The Economist has warned: “Genetics may yet threaten privacy, kill autonomy, make society homogeneous, and gut the concept of human nature. But neuroscience could do all those things first.”<a href="#_ftn3" name="_ftnref3">[3]</a></p>
    <p>The law in our liberal democracy responds to the need to restrain dangerous people like Collera by what I have termed “desert-disease” jurisprudence.<a href="#_ftn4" name="_ftnref4">[4]</a>  As a consequence of taking people seriously as people, as potential moral agents, we believe that it is crucial to cabin the potentially broad power of the state to deprive people of liberty. With rare exceptions, the state may only restrain a citizen if that citizen has been fairly convicted of crime and deserves the punishment imposed. If a citizen has not committed a crime but appears dangerous and not responsible for his or her dangerousness—usually as a result of mental disorder or other diseases that impair rationality—the citizen may be civilly committed. People who are simply dangerous but who have committed no crime and who are responsible agents cannot be restrained. The normative basis of desert-disease jurisprudence is that it enhances liberty and autonomy by leaving people free to pursue their projects unless an agent responsibly commits a crime or unless, through no fault of his own, the agent is non-responsibly dangerous. In the latter case, the agent’s rationality is impaired and the usual presumption in favor of liberty and autonomy yields to the need for societal protection, and preventive detention and involuntary treatment may be warranted. </p>
    <p>The law’s concern with justifying and protecting liberty and autonomy is deeply rooted in the conception of rational personhood.  Human beings are part of the physical universe and subject to the laws of that universe, but, as far as we know, we are the only creatures on earth capable of acting fully for reasons and self-consciously. Only human beings are genuinely reason-responsive and live in societies that are in part governed by behavior-guiding norms. Only human beings have projects that are essential to living a good life. Only human beings have expectations of each other and require justification for interference in each other’s lives that will prevent the pursuit of projects and seeking the good. We are the only creatures to whom the questions “Why did you do that?” and “How <i>should</i> we behave?” are properly addressed, and only human beings hurt and kill each other in response to the answers to such questions. As a consequence of this view of ourselves, human beings typically have developed rich sets of interpersonal and social attitudes, practices, and institutions, including those that deal with the risk we present to each other. Among these is the practice of holding others morally and legally responsible, which depends on our attitudes and expectations about deserved praise and blame, and our practices and institutions that express those attitudes, such as reward and punishment. </p>
    <p>There is little evidence at present that neuroscientific evidence, especially functional imaging, or genetic evidence is being introduced routinely in criminal cases outside of capital sentencing proceedings. It may well happen in the near future, however, especially as the technology becomes more broadly available and less expensive. So it’s worth considering in detail neuroscience’s radical challenge to responsibility: the claim that people are merely “victims of neuronal circumstances” or the like.  If this view of personhood is correct, it would indeed undermine all ordinary conceptions of responsibility and even the coherence of law itself. </p>
    <div>
      
<br clear="all">
      <hr align="left" width="33%">
      <div id="ftn1">
        <p>
          <a href="#_ftnref1" name="_ftn1">
            [1]
          </a>
           Tiffany W. Chow &amp; Jeffrey L. Cummings, <i>Frontal-Subcortical Circuits</i>, in Bruce L. Miller &amp; Jeffrey L. Cummings (eds.), The Human Frontal Lobes: Functions and Disorders (2d ed.) 25, 27-31 (2007).  Damage to this region is also associated with antisocial behavior. Steven W. Anderson et al., <i>Impairment of social and moral behavior related to early damage in human prefrontal cortex</i>, 2 Nat. Neurosci. 1032 (1999); R. James Blair &amp; Lisa Cipolotti, <i>Impaired social response reversal: a case of acquired sociopathy</i>, 123 Brain 1122 (2000); Jeffrey L. Saver &amp; Antonio R. Damasio, <i>Preserved access and processing of social knowledge in a patient with acquired sociopathy due to ventromedial frontal damage</i>, 29 Neuropsychologia 1241 (1991).  Let us assume, however, that Collera is not obviously damaged.
<br>
          <a href="#_ftnref2" name="_ftn2">
            [2]
          </a>
           Avshalom Caspi et al., <i>Role of genotype in the cycle of violence in maltreated children</i>, 297 Science 851 (2002).
<br>
          <a href="#_ftnref3" name="_ftn3">
            [3]
          </a>
          
            <i>The Ethics of Brain Science: Open Your Mind</i>, Economist, May 25, 2002, at 77.
<br>
          <a href="#_ftnref4" name="_ftn4">
            [4]
          </a>
           Stephen J. Morse, <i>Neither Desert Nor Disease</i>, 5 Legal Theory 265, 267-70 (1999).
        </p>
      </div>
    </div></p><h4>
		Downloads
	</h4><ul>
		<li><a href="http://webfeeds.brookings.edu/~/t/0/0/brookingsrss/series/futureoftheconstitution/~www.brookings.edu/~/media/research/files/papers/2011/2/03-neuroscience-morse/0203_neuroscience_morse.pdf">Download the Full Paper</a></li>
	</ul><div>
		<h4>
			Authors
		</h4><ul>
			<li>Stephen J. Morse</li>
		</ul>
	</div><div>
		Image Source: Arthur Toga/UCLA
	</div>
</div>
]]>
</content:encoded></item>
<item>
<feedburner:origLink>http://www.brookings.edu/research/papers/2011/01/27-internet-treaty-zittrain?rssid=Future+of+the+Constitution</feedburner:origLink><guid isPermaLink="false">{734BA1E4-A041-4B0A-996F-B9052B25B5FC}</guid><link>http://webfeeds.brookings.edu/~/65487892/0/brookingsrss/series/futureoftheconstitution~A-Mutual-Aid-Treaty-for-the-Internet</link><title>A Mutual Aid Treaty for the Internet</title><description><![CDATA[<div>
	<img src="http://www.brookings.edu/~/media/research/images/i/ik%20io/internet_handshake001_16x9.jpg?w=120" alt="" border="0" /><br /><p><b>Introduction</b></p><p><p class="bodytextfirstpar">By 2030 we will have all of humanity’s books online.  Google’s ambitious book scanning project – or something like it – will by then have generated high quality, searchable scans of nearly every book available in the world.  These scans will be available online to library partners and individual users with certain constraints—what those will be, we do not know yet.  It will be a library in the cloud, one that is far larger than any real world library could hope to be.  It will make no sense for a library to store thousands of physical books in its basement.  Rather, under a Google Books plan, there will be one master copy of the book in Google’s possession.<a href="#_ftn1" name="_ftnref1"><sup>[1]</sup></a><sup> </sup> The library partners display it and access it according to particular privileges.  A user can access it from anywhere.  One master book shared among many drastically lowers the costs of updates – or censorship.  For example, if one book in the system contains copyright-infringing material, the rights-holder can get a court order requiring the infringing pages of the book to be deleted from the Google server.  Google has no choice but to comply, at least as long as it continues to have tangible interests within the country demanding a change.  This vulnerability affects every text distributed through the Google platform.  Anyone who does not own a physical copy of the book—and a means to search it to verify its integrity—will now lack access to that material.  Add in orders arising from perceived defamation or any other cause of action, and holes begin to appear in the historical record in a way they did not before.</p>
    <p>Some people – and I am in this camp – are alarmed by this prospect; others regard it as important but not urgent. Still others see this as a feature, not a bug. What’s the constitutional problem, after all?  Court orders in the U.S. are subject to judicial review (indeed, they issue from judges), so can’t they be made to harmonize with the First Amendment?  Not so easily.  Current constitutional doctrine has little to say about redactions or impoundment of material after it’s had its day in court.  What has protected such material from thoroughgoing and permanent erasure is the inherent leakiness of a distributed system where books are found everywhere: in libraries, bookstores, and people’s homes.  By centralizing (and, to be sure, making more efficient) the storage of content, we are creating a world in which all copies of once-censored books like <i>Candide</i>, <i>The Call of the Wild</i>, and <i>Ulysses</i> could have been permanently destroyed at the time of the censoring and could not be studied or enjoyed after subsequent decision-makers lifted the ban.<a href="#_ftn2" name="_ftnref2"><sup>[2]</sup></a>  Worse, content that may be every bit as important—but not as famous—can be quietly redacted or removed without anyone’s even noticing. Orders need only be served on a centralized provider, rather than on one bookstore or library at a time.</p>
    <p>The systems to make this happen are being designed and implemented right now, and can be fully dominant over the decades this volume asks us to chart.  One helpful thought experiment flows from an incident that could not have been invented better than it actually happened.  Somebody offers, through Amazon, a Kindle version of <i>1984</i> by George Orwell.<a href="#_ftn3" name="_ftnref3"><sup>[3]</sup></a><sup> </sup> People buy it.  Later, Amazon has reason to think there is a copyright issue that was not cleared by the source who put it on Amazon.  Amazon panics and sends a signal that actually deletes <i>1984</i> off of all the Kindles.  It is as if the user never bought <i>1984</i>.  It is current, not future, technology that makes it possible.  The only reason this isn’t a major issue is because other copies of <i>1984 </i>are so readily available – precisely because digitally centralized copies have yet to fully take root.  This is not literally cloud computing; for the period of time the user possessed <i>1984</i>, it technically resided physically on his or her Kindle.  But because it is not the user’s to copy or to process, and it is Amazon’s power to reach in and revise or manipulate, it is as good as a Google Books configuration—or, in this case, as bad.</p>
    <p>By 2030, a majority of global communications, commerce, and information storage will take place online. Much of this activity will be routed through a small set of corporate, governmental and institutional actors. For much but not all of our online history, a limited number of corporate actors have framed the way people interact online.  In the 1990s, it was the online service providers such as Prodigy and AOL that regulated our nascent digital interactions. Today, most people have direct access to the Web, but now their online lives are mediated by consolidating corporate search engines, content providers, and social networking sites. </p>
    <p>With greater online centralization comes greater vulnerability, whether the centralization is public or private. Corporations are discrete entities, subject to pressures from repressive governments and criminal or terrorist threats. If Google’s services were to go offline tomorrow, the lives of millions of people would be disrupted. </p>
    <p>This risk grows more acute as both the importance and centralization of online services increase. The Internet already occupies a vital space in public and private life. By 2030, that place will only be more vital. Threats to cybersecurity will thus present threats to human rights and civil liberties. Disruptions in access to cloud-hosted services will cut off the primary and perhaps the only socially safe mode of communication for journalists, political activists, and ordinary citizens in countries around the world. Corrupt governments need not bother producing their own propaganda if selective Internet filtering can provide just as sure a technique of controlling citizens’ access to and perception of news and other information. This is the Fort Knox problem: a single bottleneck in the path to data, or a single logical trove where we put all our eggs in one basket.</p>
    <p>This scenario has implications for both free speech and cybersecurity.  The Fort Knox mentality exposes vulnerable speech to unilateral and obliterating censorship: losing the inherent leakiness of the present model means we lose the benefits of the redundancies it creates.  These redundancies protect our civil liberty and security in ways as important as a scheme of constitutional rights.  Indeed, the Constitution is interpreted with such reality in mind.  The ease with which an order can be upheld to impound copyright-infringing materials, or to destroy defamatory texts, can only be understood in the context of how difficult such actions are to undertake in the world of 2010.  Those difficulties make such actions rare and expensive.  Should the difficulties in censorship diminish or evaporate, there is no guarantee that compensating protections would be enacted by Congress or fashioned by judges.</p>
    <p>Moreover, threats to free speech online come not only from governments wishing to censor through the mechanisms of law but from <i>anyone</i> wishing to censor through the mechanisms of cyberattack, such as denial of service.  If a site has unpopular or sensitive content it can find itself brought down – forced to either abandon its message or seek shelter under the umbrella of a well-protected corporate information hosting apparatus.  Such companies may charge accordingly for their services – or, fearing that they will be swamped by a retargeted attack, refuse to host at all.  This is why the more traditional government censorship configurations are best understood with a cybersecurity counterpart.</p>
    <p>That which appears safer in the short term for cybersecurity – putting all our bits in the hands of a few centralized corporations – makes traditional censorship easier.</p>
    <p>The key to solving the Fort Knox problem is to make the current decentralized Web a more robust one.  This can be done by reforging the technological relationships sites and services have with each other on the Web, drawing conceptually from mutual aid treaties among states in the real world. Mutual aid lets us envision a new socially- and technologically-based system of redundancy and security. </p>
    <div>
      <br clear="all">
      <hr align="left" width="33%">
      <div id="ftn1">
        <p>
          <a href="#_ftnref1" name="_ftn1">
            
              <sup>[1]</sup>
            
          </a>
          
            <i>
              <sup> </sup>See</i> <i>generally</i> Google Books Settlement Agreement, 
          <a href="http://books.google.com/‌googlebooks/‌agreement/">
            http://books.google.com/‌googlebooks/‌agreement/
          </a>
           (last visited Apr. 7, 2010).<br>
          <a href="#_ftnref2" name="_ftn2">
            
              <sup>[2]</sup>
            
          </a>
          
            <sup> </sup><i>See</i> John M. Ockerbloom, <i>Books Banned Online</i>, 
          <a href="http://onlinebooks.library.upenn.edu/banned-books.html">
            http://onlinebooks.library.upenn.edu/banned-books.html
          </a>
          ; Jonathan Zittrain, <i>The Future of the Internet – And How to Stop It </i>(Yale: 2008), p. 116.<br>
          <a href="#_ftnref3" name="_ftn3">
            
              <sup>[3]</sup>
            
          </a>
          
            <i>
              <sup> </sup>See</i> Brad Stone, <i>Amazon Erases Two Classics from Kindle.  (One Is ‘1984.’)</i>, N.Y. Times, July 18, 2009, at B1.
        </p>
      </div>
    </div></p><h4>
		Downloads
	</h4><ul>
		<li><a href="http://www.brookings.edu/~/media/research/files/papers/2011/1/27-internet-treaty-zittrain/0127_internet_treaty_zittrain.pdf">Download the Full Paper</a></li>
	</ul><div>
		<h4>
			Authors
		</h4><ul>
			<li>Jonathan Zittrain</li>
		</ul>
	</div><div>
		Image Source: Reza Estakhrian
	</div>
</div>]]>
</description><pubDate>Thu, 27 Jan 2011 14:46:00 -0500</pubDate><dc:creator>Jonathan Zittrain</dc:creator>
<itunes:summary> 
Introduction
By&#xA0;2030 we will have all of humanity&#x2019;s books online.&#xA0; Google&#x2019;s ambitious book scanning project &#x2013; or something like it &#x2013; will by then have generated high quality, searchable scans of nearly every book available in the world.&#xA0; These scans will be available online to library partners and individual users with certain constraints&#x2014;what those will be, we do not know yet.&#xA0; It will be a library in the cloud, one that is far larger than any real world library could hope to be.&#xA0; It will make no sense for a library to store thousands of physical books in its basement.&#xA0; Rather, under a Google Books plan, there will be one master copy of the book in Google&#x2019;s possession.[1]&#xA0; The library partners display it and access it according to particular privileges.&#xA0; A user can access it from anywhere.&#xA0; One master book shared among many drastically lowers the costs of updates &#x2013; or censorship.&#xA0; For example, if one book in the system contains copyright-infringing material, the rights-holder can get a court order requiring the infringing pages of the book to be deleted from the Google server.&#xA0; Google has no choice but to comply, at least as long as it continues to have tangible interests within the country demanding a change.&#xA0; This vulnerability affects every text distributed through the Google platform.&#xA0; Anyone who does not own a physical copy of the book&#x2014;and a means to search it to verify its integrity&#x2014;will now lack access to that material.&#xA0; Add in orders arising from perceived defamation or any other cause of action, and holes begin to appear in the historical record in a way they did not before. 
Some people &#x2013; and I am in this camp &#x2013; are alarmed by this prospect; others regard it as important but not urgent. Still others see this as a feature, not a bug. What&#x2019;s the constitutional problem, after all?&#xA0; Court orders in the U.S. are subject to judicial review (indeed, they issue from judges), so can&#x2019;t they be made to harmonize with the First Amendment?&#xA0; Not so easily.&#xA0; Current constitutional doctrine has little to say about redactions or impoundment of material after it&#x2019;s had its day in court.&#xA0; What has protected such material from thoroughgoing and permanent erasure is the inherent leakiness of a distributed system where books are found everywhere: in libraries, bookstores, and people&#x2019;s homes.&#xA0; By centralizing (and, to be sure, making more efficient) the storage of content, we are creating a world in which all copies of once-censored books like Candide, The Call of the Wild, and Ulysses could have been permanently destroyed at the time of the censoring and could not be studied or enjoyed after subsequent decision-makers lifted the ban.[2]&#xA0; Worse, content that may be every bit as important&#x2014;but not as famous&#x2014;can be quietly redacted or removed without anyone&#x2019;s even noticing. Orders need only be served on a centralized provider, rather than on one bookstore or library at a time. 
The systems to make this happen are being designed and implemented right now, and can be fully dominant over the decades this volume asks us to chart.&#xA0; One helpful thought experiment flows from an incident that could not have been invented better than it actually happened.&#xA0; Somebody offers, through Amazon, a Kindle version of 1984 by George Orwell.[3]&#xA0; People buy it.&#xA0; Later, Amazon has reason to think there is a copyright issue that was not cleared by the source who put it on Amazon.&#xA0; Amazon panics and sends a signal that actually deletes 1984 off of all the Kindles.&#xA0; It is as if the user never bought 1984.&#xA0; It is current, not future, technology that makes it possible.&#xA0; The only reason this isn&#x2019;t a major issue is because other copies of 1984 are so readily available &#x2013; precisely ... </itunes:summary>
<itunes:subtitle>Introduction
By&#xA0;2030 we will have all of humanity&#x2019;s books online.&#xA0; Google&#x2019;s ambitious book scanning project &#x2013; or something like it &#x2013; will by then have generated high quality, searchable scans of nearly every ... </itunes:subtitle><content:encoded><![CDATA[<div>
	<img src="http://www.brookings.edu/~/media/research/images/i/ik%20io/internet_handshake001_16x9.jpg?w=120" alt="" border="0" />
<br><p><b>Introduction</b></p><p><p class="bodytextfirstpar">By 2030 we will have all of humanity’s books online.  Google’s ambitious book scanning project – or something like it – will by then have generated high quality, searchable scans of nearly every book available in the world.  These scans will be available online to library partners and individual users with certain constraints—what those will be, we do not know yet.  It will be a library in the cloud, one that is far larger than any real world library could hope to be.  It will make no sense for a library to store thousands of physical books in its basement.  Rather, under a Google Books plan, there will be one master copy of the book in Google’s possession.<a href="#_ftn1" name="_ftnref1"><sup>[1]</sup></a><sup> </sup> The library partners display it and access it according to particular privileges.  A user can access it from anywhere.  One master book shared among many drastically lowers the costs of updates – or censorship.  For example, if one book in the system contains copyright-infringing material, the rights-holder can get a court order requiring the infringing pages of the book to be deleted from the Google server.  Google has no choice but to comply, at least as long as it continues to have tangible interests within the country demanding a change.  This vulnerability affects every text distributed through the Google platform.  Anyone who does not own a physical copy of the book—and a means to search it to verify its integrity—will now lack access to that material.  Add in orders arising from perceived defamation or any other cause of action, and holes begin to appear in the historical record in a way they did not before.</p>
    <p>Some people – and I am in this camp – are alarmed by this prospect; others regard it as important but not urgent. Still others see this as a feature, not a bug. What’s the constitutional problem, after all?  Court orders in the U.S. are subject to judicial review (indeed, they issue from judges), so can’t they be made to harmonize with the First Amendment?  Not so easily.  Current constitutional doctrine has little to say about redactions or impoundment of material after it’s had its day in court.  What has protected such material from thoroughgoing and permanent erasure is the inherent leakiness of a distributed system where books are found everywhere: in libraries, bookstores, and people’s homes.  By centralizing (and, to be sure, making more efficient) the storage of content, we are creating a world in which all copies of once-censored books like <i>Candide</i>, <i>The Call of the Wild</i>, and <i>Ulysses</i> could have been permanently destroyed at the time of the censoring and could not be studied or enjoyed after subsequent decision-makers lifted the ban.<a href="#_ftn2" name="_ftnref2"><sup>[2]</sup></a>  Worse, content that may be every bit as important—but not as famous—can be quietly redacted or removed without anyone’s even noticing. Orders need only be served on a centralized provider, rather than on one bookstore or library at a time.</p>
    <p>The systems to make this happen are being designed and implemented right now, and can be fully dominant over the decades this volume asks us to chart.  One helpful thought experiment flows from an incident that could not have been invented better than it actually happened.  Somebody offers, through Amazon, a Kindle version of <i>1984</i> by George Orwell.<a href="#_ftn3" name="_ftnref3"><sup>[3]</sup></a><sup> </sup> People buy it.  Later, Amazon has reason to think there is a copyright issue that was not cleared by the source who put it on Amazon.  Amazon panics and sends a signal that actually deletes <i>1984</i> off of all the Kindles.  It is as if the user never bought <i>1984</i>.  It is current, not future, technology that makes it possible.  The only reason this isn’t a major issue is that other copies of <i>1984</i> are so readily available – precisely because digitally centralized copies have yet to fully take root.  This is not literally cloud computing; for the period of time the user possessed <i>1984</i>, it technically resided physically on his or her Kindle.  But because it is not the user’s to copy or to process, and it is within Amazon’s power to reach in and revise or manipulate, it is as good as a Google Books configuration—or, in this case, as bad.</p>
    <p>By 2030, a majority of global communications, commerce, and information storage will take place online. Much of this activity will be routed through a small set of corporate, governmental, and institutional actors. For much but not all of our online history, a limited number of corporate actors have framed the way people interact online.  In the 1990s, it was the online service providers such as Prodigy and AOL that regulated our nascent digital interactions. Today, most people have direct access to the Web, but now their online lives are shaped by consolidating corporate search engines, content providers, and social networking sites. </p>
    <p>With greater online centralization comes greater vulnerability, whether the centralization is public or private. Corporations are discrete entities, subject to pressures from repressive governments and criminal or terrorist threats. If Google’s services were to go offline tomorrow, the lives of millions of people would be disrupted. </p>
    <p>This risk grows more acute as both the importance and centralization of online services increase. The Internet already occupies a vital space in public and private life. By 2030, that place will only be more vital. Threats to cybersecurity will thus present threats to human rights and civil liberties. Disruptions in access to cloud-hosted services will cut off the primary and perhaps the only socially safe mode of communication for journalists, political activists, and ordinary citizens in countries around the world. Corrupt governments need not bother producing their own propaganda if selective Internet filtering can provide just as sure a technique of controlling citizens’ access to and perception of news and other information. This is the Fort Knox problem: a single bottleneck in the path to data, or a single logical trove where we put all our eggs in one basket.</p>
    <p>This scenario has implications for both free speech and cybersecurity.  The Fort Knox mentality exposes vulnerable speech to unilateral and obliterating censorship: losing the inherent leakiness of the present model means we lose the benefits of the redundancies it creates.  These redundancies protect our civil liberty and security in ways as important as a scheme of constitutional rights.  Indeed, the Constitution is interpreted with that reality in mind.  The ease with which an order can be upheld to impound copyright-infringing materials, or to destroy defamatory texts, can only be understood in the context of how difficult such actions are to undertake in the world of 2010.  Those difficulties make such actions rare and expensive.  Should the difficulties in censorship diminish or evaporate, there is no guarantee that compensating protections would be enacted by Congress or fashioned by judges.</p>
    <p>Moreover, threats to free speech online come not only from governments wishing to censor through the mechanisms of law but from <i>anyone</i> wishing to censor through the mechanisms of cyberattack, such as denial of service.  If a site has unpopular or sensitive content it can find itself brought down – forced to either abandon its message or seek shelter under the umbrella of a well-protected corporate information hosting apparatus.  Such companies may charge accordingly for their services – or, fearing that they will be swamped by a retargeted attack, refuse to host at all.  This is why the more traditional government censorship configurations are best understood with a cybersecurity counterpart.</p>
    <p>That which appears safer in the short term for cybersecurity – putting all our bits in the hands of a few centralized corporations – makes traditional censorship easier.</p>
    <p>The key to solving the Fort Knox problem is to make the current decentralized Web a more robust one.  This can be done by reforging the technological relationships sites and services have with each other on the Web, drawing conceptually from mutual aid treaties among states in the real world. Mutual aid lets us envision a new socially and technologically based system of redundancy and security. </p>
    <div>
      
<br clear="all">
      <hr align="left" width="33%">
      <div id="ftn1">
        <p>
          <a href="#_ftnref1" name="_ftn1">
            
              <sup>[1]</sup>
            
          </a>
          
            <i>See generally</i> Google Books Settlement Agreement, 
          <a href="http://webfeeds.brookings.edu/~/t/0/0/brookingsrss/series/futureoftheconstitution/~books.google.com/‌googlebooks/‌agreement/">
            http://books.google.com/googlebooks/agreement/
          </a>
           (last visited Apr. 7, 2010).
<br>
          <a href="#_ftnref2" name="_ftn2">
            
              <sup>[2]</sup>
            
          </a>
          
            <i>See</i> John M. Ockerbloom, <i>Books Banned Online</i>, 
          <a href="http://webfeeds.brookings.edu/~/t/0/0/brookingsrss/series/futureoftheconstitution/~onlinebooks.library.upenn.edu/banned-books.html">
            http://onlinebooks.library.upenn.edu/banned-books.html
          </a>
          ; Jonathan Zittrain, <i>The Future of the Internet – And How to Stop It </i>(Yale: 2008), p. 116.
<br>
          <a href="#_ftnref3" name="_ftn3">
            
              <sup>[3]</sup>
            
          </a>
          
            <i>See</i> Brad Stone, <i>Amazon Erases Two Classics from Kindle.  (One Is ‘1984.’)</i>, N.Y. Times, July 18, 2009, at B1.
        </p>
      </div>
    </div></p><h4>
		Downloads
	</h4><ul>
		<li><a href="http://webfeeds.brookings.edu/~/t/0/0/brookingsrss/series/futureoftheconstitution/~www.brookings.edu/~/media/research/files/papers/2011/1/27-internet-treaty-zittrain/0127_internet_treaty_zittrain.pdf">Download the Full Paper</a></li>
	</ul><div>
		<h4>
			Authors
		</h4><ul>
			<li>Jonathan Zittrain</li>
		</ul>
	</div><div>
		Image Source: Reza Estakhrian
	</div>
</div>
]]>
</content:encoded></item>
<item>
<feedburner:origLink>http://www.brookings.edu/research/papers/2011/01/21-reproductive-technology-robertson?rssid=Future+of+the+Constitution</feedburner:origLink><guid isPermaLink="false">{09DC9A41-876C-4D79-97D0-03EE59C2B2D1}</guid><link>http://webfeeds.brookings.edu/~/65487893/0/brookingsrss/series/futureoftheconstitution~Reproductive-Rights-and-Reproductive-Technology-in</link><title>Reproductive Rights and Reproductive Technology in 2030 </title><description><![CDATA[<div>
	<img src="http://www.brookings.edu/~/media/research/images/r/ra%20re/reproductive_lab001_16x9.jpg?w=120" alt="" border="0" /><br /><p><b>INTRODUCTION</b></p><p><p class="bodytextfirstpar">Larry, a pediatrician, and David, a wills lawyer, meet in their late 20s, fall in love, and marry on June 15, 2025 in Indianapolis. Three years later they take in a foster child for eight months, and find the experience rewarding. By 2030, they are well-enough established in their careers to think about having their own child. Larry’s 24-year-old sister Marge has agreed to donate her eggs, and David will provide the sperm, so that each partner will have a genetic connection with the child. They work with an agency that matches couples with gestational surrogates, and settle on Janice, a 34-year-old nurse and mother of two, who is willing to help them in exchange for a $75,000 fee.</p>
    <p>In the process, Larry and David come to realize that they would prefer to have a male child that shares their sexual orientation. Reproductive cloning won’t do—the FDA hasn’t yet certified it as safe and effective.  But gene studies show a strong correlation between five genes and sexual orientation in both males and females. Larry and David discuss with their doctors the feasibility of screening the embryos they create with Marge’s eggs for male genes linked to a homosexual orientation. The clinic doctors are experts in embryo screening and alteration, but cannot guarantee that the resulting embryos will in fact turn out to be homosexual. To increase the certainty, they will insert additional “gay gene” sequences in the embryos before they are placed in Janice.  Embryos not used will be frozen for later use or for stem cell technology to create eggs from Larry’s skin cells so that the resulting child would be the genetic offspring of both Larry and David.</p>
    <p>The scenario painted here is futuristic, but only partially so. The techniques to be used—IVF, egg donation, and gestational surrogacy—are now widely available, as is embryo screening for genetic disease and gender.  Same-sex marriage is likely to be soon recognized as a federal constitutional right.  No “gay genes” have yet been identified. But genomic knowledge is mushrooming. The genetic code for nonmedical traits such as sexual orientation may be unlocked in coming years. Altering a person’s genes by inserting or deleting DNA sequences is still theoretical, but great progress has occurred with animals.  Cloning is unlikely to be available by 2030, but producing gametes from somatic cells in a person’s body might by then be feasible.</p>
    <p>Technical prowess, however, should not be confused with ethical and social acceptability. The 30 years between 1980 and 2010, when assisted reproduction, egg donation, surrogacy, and genetic screening of embryos became widely used, have been fraught with ethical, legal, and social controversy. These techniques pose major challenges for deeply held values of autonomy, family, the welfare of children, and the importance of reproduction to human flourishing. They call starkly into question the meaning of kinship, parenthood, and the degree of control that parents should have over their children’s genes.  Increased genetic screening, alteration of genes, and cloning or obtaining gametes from somatic cells will be even more contested. </p>
    <p>In America, the law is usually entwined in public controversy, especially when sexual, family, and reproductive norms clash. Yet these techniques were launched and found a home with little legal scrutiny. Outside of abortion, the law has been largely absent from battles over reproductive and genetic technologies. As a result, there are few Supreme Court precedents directly on point. Legislation, for example, has not restrained the use of cutting-edge genetic technologies such as embryo screening and manipulation. Reproductive cloning, though not yet feasible, is highly controversial but most states have not banned it. When the law has inched forward with legal solutions to particular problems, such as disputes over lost or frozen embryos or how to share parentage among gamete donors and rearing parents, new techniques with new problems have sprung up. In Larry and David’s case, it is the desire to choose their child’s gender and shape its sexual orientation that is novel and challenging.</p>
    <p>That situation might change as techniques evolve to expand choice over the genetic characteristics of children. Eventually, legal limits will be imposed, limits which will raise questions about the constitutionality of restrictions on reproductive and genetic choice. The Constitution, however, was written in an era when none of these techniques were practiced or even imagined. Indeed, it said nothing about reproduction at all. As a result, the Supreme Court has spoken often and most recently about abortion and contraception but seldom about engaging in reproduction as such, and not at all about parents’ choosing the genes of their offspring.<a href="#_edn1" name="_ednref1">[i]</a> When it did speak against forced sterilization of criminals, it assumed heterosexual and coital conception.<a href="#_edn2" name="_ednref2">[ii]</a> The principles underlying those decisions provide general guideposts for procreative rights in a technological age, but the specifics of those rights will have to be teased out from the logic of those precedents as new techniques challenge old values and new options for reproduction open. Larry and David and thousands of other couples will need answers so that they can have the families they wish. </p>
    <div>
      <br clear="all">
      <hr align="left" width="33%">
      <div id="edn1">
        <p>
          <a href="#_ednref1" name="_edn1">
            [i] </a>
          
            <i>Gonzalez v. Carhart</i>, 550 U.S. 124 (2007).<br>
          <a href="#_ednref2" name="_edn2">
            [ii] </a>
          
            <i>Skinner v. Oklahoma,</i> 316 U.S. 535 (1942). </p>
      </div>
    </div></p><h4>
		Downloads
	</h4><ul>
		<li><a href="http://www.brookings.edu/~/media/research/files/papers/2011/1/21-reproductive-technology-robertson/0121_reproductive_technology_robertson.pdf">Download the Full Paper</a></li>
	</ul><div>
		<h4>
			Authors
		</h4><ul>
			<li>John A. Robertson</li>
		</ul>
	</div><div>
		Image Source: Visuals Unlimited
	</div>
</div>]]>
</description><pubDate>Fri, 21 Jan 2011 16:51:00 -0500</pubDate><dc:creator>John A. Robertson</dc:creator>
<itunes:summary> 
INTRODUCTION
Larry, a pediatrician, and David, a wills lawyer, meet in their late 20s, fall in love, and marry on June 15, 2025 in Indianapolis. Three years later they take in a foster child for eight months, and find the experience rewarding. By 2030, they are well-enough established in their careers to think about having their own child. Larry&#x2019;s 24-year-old sister Marge has agreed to donate her eggs, and David will provide the sperm, so that each partner will have a genetic connection with the child. They work with an agency that matches couples with gestational surrogates, and settle on Janice, a 34-year-old nurse and mother of two, who is willing to help them in exchange for a $75,000 fee. 
In the process, Larry and David come to realize that they would prefer to have a male child that shares their sexual orientation.&#xA0;Reproductive cloning won&#x2019;t do&#x2014;the FDA hasn&#x2019;t yet certified it as safe and effective. &#xA0;But gene studies show a strong correlation between five genes and sexual orientation in both males and females. Larry and David discuss with their doctors the feasibility of screening the embryos they create with Marge&#x2019;s eggs for male genes linked to a homosexual orientation. The clinic doctors are experts in embryo screening and alteration, but cannot guarantee that the resulting embryos will in fact turn out to be homosexual. To increase the certainty, they will insert additional &#8220;gay gene&#8221; sequences in the embryos before they are placed in Janice. &#xA0;Embryos not used will be frozen for later use or for stem cell technology to create eggs from Larry&#x2019;s skin cells so that the resulting child would be the genetic offspring of both Larry and David. 
The scenario painted here is futuristic, but only partially so. The techniques to be used&#x2014;IVF, egg donation, and gestational surrogacy&#x2014;are now widely available, as is embryo screening for genetic disease and gender. &#xA0;Same-sex marriage is likely to be soon recognized as a federal constitutional right. &#xA0;No &#8220;gay genes&#8221; have yet been identified. But genomic knowledge is mushrooming. The genetic code for nonmedical traits such as sexual orientation may be unlocked in coming years. Altering a person&#x2019;s genes by inserting or deleting DNA sequences is still theoretical, but great progress has occurred with animals.&#xA0; Cloning is unlikely to be available by 2030, but producing gametes from somatic cells in a person&#x2019;s body might by then be feasible. 
Technical prowess, however, should not be confused with ethical and social acceptability. The 30 years between 1980 and 2010, when assisted reproduction, egg donation, surrogacy, and genetic screening of embryos became widely used, has been fraught with ethical, legal, and social controversy. These techniques pose major challenges for deeply held values of autonomy, family, the welfare of children, and the importance of reproduction to human flourishing. They call starkly into question the meaning of kinship, parenthood, and the degree of control which parents should have over their children&#x2019;s genes.&#xA0; Increased genetic screening, alteration of genes, and cloning or obtaining gametes from somatic cells will be even more contested. 
In America, the law is usually entwined in public controversy, especially when sexual, family, and reproductive norms clash. Yet these techniques were launched and found a home with little legal scrutiny. Outside of abortion, the law has been largely absent from battles over reproductive and genetic technologies. As a result, there are few Supreme Court precedents directly on point. Legislation, for example, has not restrained the use of cutting-edge genetic technologies such as embryo screening and manipulation. Reproductive cloning, though not yet feasible, is highly controversial but most states have not banned it. When the law has inched forward with legal solutions to ... </itunes:summary>
<itunes:subtitle>INTRODUCTION
Larry, a pediatrician, and David, a wills lawyer, meet in their late 20s, fall in love, and marry on June 15, 2025 in Indianapolis. Three years later they take in a foster child for eight months, and find the experience rewarding.</itunes:subtitle><content:encoded><![CDATA[<div>
	<img src="http://www.brookings.edu/~/media/research/images/r/ra%20re/reproductive_lab001_16x9.jpg?w=120" alt="" border="0" />
<br><p><b>INTRODUCTION</b></p><p><p class="bodytextfirstpar">Larry, a pediatrician, and David, a wills lawyer, meet in their late 20s, fall in love, and marry on June 15, 2025 in Indianapolis. Three years later they take in a foster child for eight months, and find the experience rewarding. By 2030, they are well-enough established in their careers to think about having their own child. Larry’s 24-year-old sister Marge has agreed to donate her eggs, and David will provide the sperm, so that each partner will have a genetic connection with the child. They work with an agency that matches couples with gestational surrogates, and settle on Janice, a 34-year-old nurse and mother of two, who is willing to help them in exchange for a $75,000 fee.</p>
    <p>In the process, Larry and David come to realize that they would prefer to have a male child that shares their sexual orientation. Reproductive cloning won’t do—the FDA hasn’t yet certified it as safe and effective.  But gene studies show a strong correlation between five genes and sexual orientation in both males and females. Larry and David discuss with their doctors the feasibility of screening the embryos they create with Marge’s eggs for male genes linked to a homosexual orientation. The clinic doctors are experts in embryo screening and alteration, but cannot guarantee that the resulting embryos will in fact turn out to be homosexual. To increase the certainty, they will insert additional “gay gene” sequences in the embryos before they are placed in Janice.  Embryos not used will be frozen for later use or for stem cell technology to create eggs from Larry’s skin cells so that the resulting child would be the genetic offspring of both Larry and David.</p>
    <p>The scenario painted here is futuristic, but only partially so. The techniques to be used—IVF, egg donation, and gestational surrogacy—are now widely available, as is embryo screening for genetic disease and gender.  Same-sex marriage is likely to be soon recognized as a federal constitutional right.  No “gay genes” have yet been identified. But genomic knowledge is mushrooming. The genetic code for nonmedical traits such as sexual orientation may be unlocked in coming years. Altering a person’s genes by inserting or deleting DNA sequences is still theoretical, but great progress has occurred with animals.  Cloning is unlikely to be available by 2030, but producing gametes from somatic cells in a person’s body might by then be feasible.</p>
    <p>Technical prowess, however, should not be confused with ethical and social acceptability. The 30 years between 1980 and 2010, when assisted reproduction, egg donation, surrogacy, and genetic screening of embryos became widely used, were fraught with ethical, legal, and social controversy. These techniques pose major challenges for deeply held values of autonomy, family, the welfare of children, and the importance of reproduction to human flourishing. They call starkly into question the meaning of kinship, parenthood, and the degree of control that parents should have over their children’s genes.  Increased genetic screening, alteration of genes, and cloning or obtaining gametes from somatic cells will be even more contested. </p>
    <p>In America, the law is usually entwined in public controversy, especially when sexual, family, and reproductive norms clash. Yet these techniques were launched and found a home with little legal scrutiny. Outside of abortion, the law has been largely absent from battles over reproductive and genetic technologies. As a result, there are few Supreme Court precedents directly on point. Legislation, for example, has not restrained the use of cutting-edge genetic technologies such as embryo screening and manipulation. Reproductive cloning, though not yet feasible, is highly controversial, but most states have not banned it. When the law has inched forward with legal solutions to particular problems, such as disputes over lost or frozen embryos or how to share parentage among gamete donors and rearing parents, new techniques with new problems have sprung up. In Larry and David’s case, it is the desire to choose their child’s gender and shape its sexual orientation that is novel and challenging.</p>
    <p>That situation might change as techniques evolve to expand choice over the genetic characteristics of children. Eventually, legal limits will be imposed, limits which will raise questions about the constitutionality of restrictions on reproductive and genetic choice. The Constitution, however, was written in an era when none of these techniques were practiced or even imagined. Indeed, it said nothing about reproduction at all. As a result, the Supreme Court has spoken often and most recently about abortion and contraception but seldom about engaging in reproduction as such, and not at all about parents’ choosing the genes of their offspring.<a href="#_edn1" name="_ednref1">[i]</a> When it did speak against forced sterilization of criminals, it assumed heterosexual and coital conception.<a href="#_edn2" name="_ednref2">[ii]</a> The principles underlying those decisions provide general guideposts for procreative rights in a technological age, but the specifics of those rights will have to be teased out from the logic of those precedents as new techniques challenge old values and new options for reproduction open. Larry and David and thousands of other couples will need answers so that they can have the families they wish. </p>
    <div>
      
<br clear="all">
      <hr align="left" width="33%">
      <div id="edn1">
        <p>
          <a href="#_ednref1" name="_edn1">
            [i] </a>
          
            <i>Gonzales v. Carhart</i>, 550 U.S. 124 (2007).
<br>
          <a href="#_ednref2" name="_edn2">
            [ii] </a>
          
            <i>Skinner v. Oklahoma,</i> 316 U.S. 535 (1942). </p>
      </div>
    </div></p><h4>
		Downloads
	</h4><ul>
		<li><a href="http://webfeeds.brookings.edu/~/t/0/0/brookingsrss/series/futureoftheconstitution/~www.brookings.edu/~/media/research/files/papers/2011/1/21-reproductive-technology-robertson/0121_reproductive_technology_robertson.pdf">Download the Full Paper</a></li>
	</ul><div>
		<h4>
			Authors
		</h4><ul>
			<li>John A. Robertson</li>
		</ul>
	</div><div>
		Image Source: Visuals Unlimited
	</div>
</div>
]]>
</content:encoded></item>
<item>
<feedburner:origLink>http://www.brookings.edu/research/papers/2010/12/28-neuroscience-snead?rssid=Future+of+the+Constitution</feedburner:origLink><guid isPermaLink="false">{0EB00BB3-3097-40CD-94D1-D1D06F69E702}</guid><link>http://webfeeds.brookings.edu/~/65487894/0/brookingsrss/series/futureoftheconstitution~Cognitive-Neuroscience-and-the-Future-of-Punishment</link><title>Cognitive Neuroscience and the Future of Punishment</title><description><![CDATA[<div>
	<img src="http://www.brookings.edu/~/media/research/images/n/na%20ne/neuroscience001_16x9.jpg?w=120" alt="" border="0" /><br /><p><b>INTRODUCTION</b></p><p><p>The jurors filed into the courtroom and took their seats in the jury box.  It had been a long and emotionally draining couple of weeks.  The guilt phase of the trial was relatively short — there was no real question of fact as to whether the defendant had murdered the two victims.  The main contested questions — the defendant’s legal competence, sanity, and capacity to formulate the requisite <i>mens rea</i> for first degree murder — were also not terribly difficult to decide.  Though clearly emotionally troubled and probably even mentally ill, the defendant easily met the (surprisingly low) cognitive and volitional standards for guilt.  He knew what he was doing, and appreciated that it was wrongful.  He acted with malice aforethought.  He could understand the charges against him and assist in his own defense.  These were not hard questions.</p>
    <p>The sentencing phase of the trial, by contrast, had proven far more difficult to bear.  The prosecutor had described in excruciating detail the murders themselves in an effort to show that they were especially “heinous, atrocious, and cruel, manifesting exceptional depravity.”  The prosecutor and counsel for the defense each recounted the details of the defendant’s life and character.  His broken childhood, marked by unspeakable abuse and neglect.  His years of drug and alcohol use.  His spotty and unstable employment history.  His history of using violence to impose his will and pursue his interests.  They even discussed the structure and function of his brain — with reference to an array of colorful poster-board sized images — showing diminished activity in the prefrontal cortex (the seat of reasoning, self-restraint, long term planning) and above-average activity in his limbic system (the more primitive part of his brain, associated with fear and aggression).  Relying on a raft of neuroimaging studies, the prosecutor argued that the pattern of activation and structural abnormalities in the defendant’s brain were consistent with “low arousal, poor fear conditioning, lack of conscience, and decision-making deficits that have been found to characterize antisocial, psychopathic behavior.”  He further argued that this was not a temporary condition — it was permanent and unlikely to be correctable by any known therapeutic intervention.  The prosecutor argued that, taken together, this was the profile of an incorrigible criminal who would certainly kill again if given the chance.  The defense argued, to the contrary, that the evidence did not point to any tangible future risk of violence.</p>
    <p>The judge then explained to the jurors that they must decide unanimously what punishment was fitting for the crime of conviction:  life without parole or a sentence of death.  Among other things, the judge explained that “before the death penalty can be considered, the state must prove at least one statutorily-defined aggravating circumstance beyond a reasonable doubt” and that the aggravating factors outweigh all of the mitigating factors. These he described as “any fact or circumstance, relating to the crime or to the defendant’s state of mind or condition at the time of the crime, or to his character, background or record, that tends to suggest that a sentence other than death should be imposed.”   </p>
    <p>The judge looked up from his jury instructions and turned towards the jury box. “Ladies and gentlemen, let me add a word of caution regarding your judgment about mitigating factors.  Some of you may be tempted to ask yourselves ‘Was it really the defendant that did this?  Or was it his background?  Or his brain?’ You might be tempted to ask yourselves ‘What does this defendant <i>deserve</i> in light of his character, biology, and circumstances?’ Some of you might even be tempted to argue to your fellow jurors that ‘this man does not <i>deserve</i> the ultimate punishment in light of his diminished (though non-excusing) capacity to act responsibly borne from a bad past and a bad brain; capital punishment in this case is <i>disproportionate</i> to the defendant’s moral culpability.’” The judge’s eyes narrowed and he leaned even farther forward.  “But, Ladies and gentlemen of the jury, you must not ask such questions or entertain such ideas.  The sole question before you, as a matter of law, is much narrower. The <i>only</i> question you are to answer is this: is this defendant likely to present a future danger to others or society? You should treat every fact that suggests that he does present such a danger as an aggravating factor; every fact suggesting the contrary is a mitigating factor. Matters of ’desert,’ ‘retributive justice,’ or proportionality in light of moral culpability are immaterial to your decision. Ladies and gentlemen, this is the year 2040. Cognitive neuroscientists have long ago shown that ‘moral responsibility,’ ‘blameworthiness,’ and the like are unintelligible concepts that depend on an intuitive, libertarian notion of free will that is undermined by science. Such notions are, in the words of two of the most influential early proponents of this new approach to punishment, ‘illusions generated by our cognitive architecture.’ We have integrated this insight into our criminal law. 
Punishment is not for meting out ‘just deserts’ based on the fiction of moral responsibility. It is simply an instrument for promoting future social welfare.  We impose punishment solely to prevent future crime. And this change has been for the better.  As another pioneer of the revolution in punishment — himself an eminent cognitive neuroscientist — wisely wrote at the beginning of the twenty-first century: ‘Although it may seem dehumanizing to medicalize people into being broken cars, it can still be vastly more humane than moralizing them into being sinners.’ So, please ladies and gentlemen of the jury.  Keep your eye on the ball, and do not indulge any of the old and discredited notions about retributive justice.” With that, the judge adjourned and dismissed the jury so that it could begin its deliberations.</p>
    <p class="bodytextfirstpar">The above hypothetical is obviously fanciful.  But it borrows concepts and arguments directly from a current debate that has been unfolding alongside the advent of extraordinary advances in cognitive neuroscience (particularly as augmented by revolutionary imaging technology that affords novel ways to examine the structure and function of the brain). Such advances have breathed new life into very old arguments about human agency, moral responsibility, and the proper ends of criminal punishment. A prominent group of cognitive neuroscientists, joined by sympathetic philosophers, lawyers, and social scientists, has drawn upon the tools of their discipline in an effort to embarrass, discredit, and ultimately overthrow retribution as a distributive justification for punishment. The architects of this cognitive neuroscience project regard retribution as the root cause of the brutality and inhumanity of the American criminal justice system, generally, and the institution of capital punishment, in particular.  To replace retribution, they argue for the adoption of a criminal law regime animated solely by the forward-looking (consequentialist) aim of avoiding social harms.  This new framework, they hope, will usher in a new era of what some have referred to as “therapeutic justice” for criminal defendants, which is meant to be both more humane and more compassionate.  </p>
    <p>To be sure, not all cognitive neuroscientists subscribe to this program.  Indeed, there are many thoughtful voices who raise opposition to this project on various grounds — some prudential and some principled.  Whatever one thinks about the cognitive neuroscience project for criminal punishment, however, it deserves to be taken seriously and its arguments should be followed to their ultimate conclusions.  This is my aim in the present chapter.  In it, I will discuss the contours of the project and explore the radical conceptual challenge that it poses for criminal punishment in America.  I will also offer a critique of the project, arguing that jettisoning the notion of retributive justice in criminal punishment will not lead to a more humane legal regime as supporters of the project hope.  Rather, by untethering punishment from moral culpability and focusing entirely on the prediction and prevention of socially harmful behavior, the cognitive neuroscience project eliminates the last refuge of defendants who are legally and factually guilty, but who have diminished culpability owing to some aspect of their character, background, or biology.  Indeed, viewed through the lens urged by the cognitive neuroscience project, the only relevance of a non-excusing disposition to criminal behavior is as a justification for incapacitation.  The logic of the cognitive neuroscience project could even lead to the embrace of more aggressive use of preventive detention as a solution for categories of criminals that inspire special fears in the polity — including sexual predators and terrorists.</p>
    <p>The techniques of cognitive neuroscience are not yet sufficiently developed to support its aspirations. They may never be. But it is always wise to examine the consequences of a nascent moral-technological program <i>before</i> it is upon you and in widespread use. My purpose in this chapter is to take seriously the claims of the cognitive neuroscience project so that we may be clear-eyed about its consequences before we consider embracing it.  </p></p><h4>
		Downloads
	</h4><ul>
		<li><a href="http://www.brookings.edu/~/media/research/files/papers/2010/12/28-neuroscience-snead/1228_neuroscience_snead.pdf">Download the Full Paper</a></li>
	</ul><div>
		<h4>
			Authors
		</h4><ul>
			<li>O. Carter Snead</li>
		</ul>
	</div><div>
		Image Source: © Ho New / Reuters
	</div>
</div>]]>
</description><pubDate>Tue, 28 Dec 2010 11:47:00 -0500</pubDate><dc:creator>O. Carter Snead</dc:creator>
<itunes:summary> 
INTRODUCTION
The jurors filed into the courtroom and took their seats in the jury box.&#xA0; It had been a long and emotionally draining couple of weeks.&#xA0; The guilt phase of the trial was relatively short &#x2014; there was no real question of fact as to whether the defendant had murdered the two victims.&#xA0; The main contested questions &#x2014; the defendant&#x2019;s legal competence, sanity, and capacity to formulate the requisite mens rea for first degree murder &#x2014; were also not terribly difficult to decide.&#xA0; Though clearly emotionally troubled and probably even mentally ill, the defendant easily met the (surprisingly low) cognitive and volitional standards for guilt.&#xA0; He knew what he was doing, and appreciated that it was wrongful.&#xA0; He acted with malice aforethought.&#xA0; He could understand the charges against him and assist in his own defense.&#xA0; These were not hard questions. 
The sentencing phase of the trial, by contrast, had proven far more difficult to bear.&#xA0; The prosecutor had described in excruciating detail the murders themselves in an effort to show that they were especially &#8220;heinous, atrocious, and cruel, manifesting exceptional depravity.&#8221;&#xA0; The prosecutor and counsel for the defense each recounted the details of the defendant&#x2019;s life and character.&#xA0; His broken childhood, marked by unspeakable abuse and neglect.&#xA0; His years of drug and alcohol use.&#xA0; His spotty and unstable employment history.&#xA0; His history of using violence to impose his will and pursue his interests.&#xA0; They even discussed the structure and function of his brain &#x2014; with reference to an array of colorful poster-board sized images &#x2014; showing diminished activity in the prefrontal cortex (the seat of reasoning, self-restraint, long term planning) and above-average activity in his limbic system (the more primitive part of his brain, associated with fear and aggression).&#xA0; Relying on a raft of neuroimaging studies, the prosecutor argued that the pattern of activation and structural abnormalities in the defendant&#x2019;s brain were consistent with &#8220;low arousal, poor fear conditioning, lack of conscience, and decision-making deficits that have been found to characterize antisocial, psychopathic behavior.&#8221;&#xA0; He further argued that this was not a temporary condition &#x2014; it was permanent and unlikely to be correctable by any known therapeutic intervention.&#xA0; The prosecutor argued that, taken together, this was the profile of an incorrigible criminal who would certainly kill again if given the chance.&#xA0; The defense argued, to the contrary, that the evidence did not point to any tangible future risk of violence. 
The judge then explained to the jurors that they must decide unanimously what punishment was fitting for the crime of conviction:&#xA0; life without parole or a sentence of death.&#xA0; Among other things, the judge explained that &#8220;before the death penalty can be considered, the state must prove at least one statutorily-defined aggravating circumstance beyond a reasonable doubt&#8221; and that the aggravating factors outweigh all of the mitigating factors. These he described as &#8220;any fact or circumstance, relating to the crime or to the defendant&#x2019;s state of mind or condition at the time of the crime, or to his character, background or record, that tends to suggest that a sentence other than death should be imposed.&#8221;&#xA0;&#xA0; 
The judge looked up from his jury instructions and turned towards the jury box. &#8220;Ladies and gentlemen, let me add a word of caution regarding your judgment about mitigating factors.&#xA0; Some of you may be tempted to ask yourselves &#x2018;Was it really the defendant that did this?&#xA0; Or was it his background?&#xA0; Or his brain?&#x2019; You might be tempted to ask yourselves &#x2018;What does this defendant deserve in light of his character, biology, ... </itunes:summary>
<itunes:subtitle>INTRODUCTION
The jurors filed into the courtroom and took their seats in the jury box.&#xA0; It had been a long and emotionally draining couple of weeks.&#xA0; The guilt phase of the trial was relatively short &#x2014; there was no real question of ... </itunes:subtitle><content:encoded><![CDATA[<div>
	<img src="http://www.brookings.edu/~/media/research/images/n/na%20ne/neuroscience001_16x9.jpg?w=120" alt="" border="0" />
<br><p><b>INTRODUCTION</b></p><p><p>The jurors filed into the courtroom and took their seats in the jury box.  It had been a long and emotionally draining couple of weeks.  The guilt phase of the trial was relatively short — there was no real question of fact as to whether the defendant had murdered the two victims.  The main contested questions — the defendant’s legal competence, sanity, and capacity to formulate the requisite <i>mens rea</i> for first degree murder — were also not terribly difficult to decide.  Though clearly emotionally troubled and probably even mentally ill, the defendant easily met the (surprisingly low) cognitive and volitional standards for guilt.  He knew what he was doing, and appreciated that it was wrongful.  He acted with malice aforethought.  He could understand the charges against him and assist in his own defense.  These were not hard questions.</p>
    <p>The sentencing phase of the trial, by contrast, had proven far more difficult to bear.  The prosecutor had described in excruciating detail the murders themselves in an effort to show that they were especially “heinous, atrocious, and cruel, manifesting exceptional depravity.”  The prosecutor and counsel for the defense each recounted the details of the defendant’s life and character.  His broken childhood, marked by unspeakable abuse and neglect.  His years of drug and alcohol use.  His spotty and unstable employment history.  His history of using violence to impose his will and pursue his interests.  They even discussed the structure and function of his brain — with reference to an array of colorful poster-board sized images — showing diminished activity in the prefrontal cortex (the seat of reasoning, self-restraint, long term planning) and above-average activity in his limbic system (the more primitive part of his brain, associated with fear and aggression).  Relying on a raft of neuroimaging studies, the prosecutor argued that the pattern of activation and structural abnormalities in the defendant’s brain were consistent with “low arousal, poor fear conditioning, lack of conscience, and decision-making deficits that have been found to characterize antisocial, psychopathic behavior.”  He further argued that this was not a temporary condition — it was permanent and unlikely to be correctable by any known therapeutic intervention.  The prosecutor argued that, taken together, this was the profile of an incorrigible criminal who would certainly kill again if given the chance.  The defense argued, to the contrary, that the evidence did not point to any tangible future risk of violence.</p>
    <p>The judge then explained to the jurors that they must decide unanimously what punishment was fitting for the crime of conviction:  life without parole or a sentence of death.  Among other things, the judge explained that “before the death penalty can be considered, the state must prove at least one statutorily-defined aggravating circumstance beyond a reasonable doubt” and that the aggravating factors outweigh all of the mitigating factors. These he described as “any fact or circumstance, relating to the crime or to the defendant’s state of mind or condition at the time of the crime, or to his character, background or record, that tends to suggest that a sentence other than death should be imposed.”   </p>
    <p>The judge looked up from his jury instructions and turned towards the jury box. “Ladies and gentlemen, let me add a word of caution regarding your judgment about mitigating factors.  Some of you may be tempted to ask yourselves ‘Was it really the defendant that did this?  Or was it his background?  Or his brain?’ You might be tempted to ask yourselves ‘What does this defendant <i>deserve</i> in light of his character, biology, and circumstances?’ Some of you might even be tempted to argue to your fellow jurors that ‘this man does not <i>deserve</i> the ultimate punishment in light of his diminished (though non-excusing) capacity to act responsibly borne from a bad past and a bad brain; capital punishment in this case is <i>disproportionate</i> to the defendant’s moral culpability.’” The judge’s eyes narrowed and he leaned even farther forward.  “But, Ladies and gentlemen of the jury, you must not ask such questions or entertain such ideas.  The sole question before you, as a matter of law, is much narrower. The <i>only</i> question you are to answer is this: is this defendant likely to present a future danger to others or society? You should treat every fact that suggests that he does present such a danger as an aggravating factor; every fact suggesting the contrary is a mitigating factor. Matters of ’desert,’ ‘retributive justice,’ or proportionality in light of moral culpability are immaterial to your decision. Ladies and gentlemen, this is the year 2040. Cognitive neuroscientists have long ago shown that ‘moral responsibility,’ ‘blameworthiness,’ and the like are unintelligible concepts that depend on an intuitive, libertarian notion of free will that is undermined by science. Such notions are, in the words of two of the most influential early proponents of this new approach to punishment, ‘illusions generated by our cognitive architecture.’ We have integrated this insight into our criminal law. 
Punishment is not for meting out ‘just deserts’ based on the fiction of moral responsibility. It is simply an instrument for promoting future social welfare.  We impose punishment solely to prevent future crime. And this change has been for the better.  As another pioneer of the revolution in punishment — himself an eminent cognitive neuroscientist — wisely wrote at the beginning of the twenty-first century: ‘Although it may seem dehumanizing to medicalize people into being broken cars, it can still be vastly more humane than moralizing them into being sinners.’ So, please ladies and gentlemen of the jury.  Keep your eye on the ball, and do not indulge any of the old and discredited notions about retributive justice.” With that, the judge adjourned and dismissed the jury so that it could begin its deliberations.</p>
    <p class="bodytextfirstpar">The above hypothetical is obviously fanciful.  But it borrows concepts and arguments directly from a current debate that has been unfolding alongside the advent of extraordinary advances in cognitive neuroscience (particularly as augmented by revolutionary imaging technology that affords novel ways to examine the structure and function of the brain). Such advances have breathed new life into very old arguments about human agency, moral responsibility, and the proper ends of criminal punishment. A prominent group of cognitive neuroscientists, joined by sympathetic philosophers, lawyers, and social scientists, has drawn upon the tools of their discipline in an effort to embarrass, discredit, and ultimately overthrow retribution as a distributive justification for punishment. The architects of this cognitive neuroscience project regard retribution as the root cause of the brutality and inhumanity of the American criminal justice system, generally, and the institution of capital punishment, in particular.  To replace retribution, they argue for the adoption of a criminal law regime animated solely by the forward-looking (consequentialist) aim of avoiding social harms.  This new framework, they hope, will usher in a new era of what some have referred to as “therapeutic justice” for criminal defendants, which is meant to be both more humane and more compassionate.  </p>
    <p>To be sure, not all cognitive neuroscientists subscribe to this program.  Indeed, there are many thoughtful voices who raise opposition to this project on various grounds — some prudential and some principled.  Whatever one thinks about the cognitive neuroscience project for criminal punishment, however, it deserves to be taken seriously and its arguments should be followed to their ultimate conclusions.  This is my aim in the present chapter.  In it, I will discuss the contours of the project and explore the radical conceptual challenge that it poses for criminal punishment in America.  I will also offer a critique of the project, arguing that jettisoning the notion of retributive justice in criminal punishment will not lead to a more humane legal regime as supporters of the project hope.  Rather, by untethering punishment from moral culpability and focusing entirely on the prediction and prevention of socially harmful behavior, the cognitive neuroscience project eliminates the last refuge of defendants who are legally and factually guilty, but who have diminished culpability owing to some aspect of their character, background, or biology.  Indeed, viewed through the lens urged by the cognitive neuroscience project, the only relevance of a non-excusing disposition to criminal behavior is as a justification for incapacitation.  The logic of the cognitive neuroscience project could even lead to the embrace of more aggressive use of preventive detention as a solution for categories of criminals that inspire special fears in the polity — including sexual predators and terrorists.</p>
    <p>The techniques of cognitive neuroscience are not yet sufficiently developed to support its aspirations. They may never be. But it is always wise to examine the consequences of a nascent moral-technological program <i>before</i> it is upon you and in widespread use. My purpose in this chapter is to take seriously the claims of the cognitive neuroscience project so that we may be clear-eyed about its consequences before we consider embracing it.  </p></p><h4>
		Downloads
	</h4><ul>
		<li><a href="http://webfeeds.brookings.edu/~/t/0/0/brookingsrss/series/futureoftheconstitution/~www.brookings.edu/~/media/research/files/papers/2010/12/28-neuroscience-snead/1228_neuroscience_snead.pdf">Download the Full Paper</a></li>
	</ul><div>
		<h4>
			Authors
		</h4><ul>
			<li>O. Carter Snead</li>
		</ul>
	</div><div>
		Image Source: © Ho New / Reuters
	</div>
</div>]]>
</content:encoded></item>
<item>
<feedburner:origLink>http://www.brookings.edu/research/papers/2010/12/27-censorship-wu?rssid=Future+of+the+Constitution</feedburner:origLink><guid isPermaLink="false">{B06925AD-6BAD-4BB0-883B-2A424BD6CB41}</guid><link>http://webfeeds.brookings.edu/~/65487895/0/brookingsrss/series/futureoftheconstitution~Is-Filtering-Censorship-The-Second-Free-Speech-Tradition</link><title>Is Filtering Censorship?  The Second Free Speech Tradition </title><description><![CDATA[<div>
	<img src="http://www.brookings.edu/~/media/research/images/g/gk%20go/google_search001_16x9.jpg?w=120" alt="" border="0" /><br /><p><b>INTRODUCTION</b></p><p><p>When Google merged with telecommunications giant AT&amp;T there was, of course, some opposition.  Some said, rather heatedly, that an information monopolist of a kind never seen before was in the works.  But given the state of the industry after the crash, and the shocking bankruptcy of Apple, there were few who would deny that some kind of merger was necessary.  Necessary, that is, not just to save jobs but to save the communications infrastructure that millions of Americans had come to depend on. After it went through, contrary to some of the dire warnings that came out, everything was much the same.   Google was still Google, the telephone company was still AT&amp;T, and after a while, much of the hubbub died down.</p>
    <p>It was a few years later that the rumors began, mostly leaks from former employees, suggesting that GT&amp;T (now AT&amp;T again) was up to something.  Some said the firm was fixing its search results and taking other steps to ensure that Google itself would never be displaced from its throne.   Of course, while it made for some good headlines, no one paid too much attention.  The fact is that there are always conspiracy theorists and disgruntled employees out there, no matter what the company. When GT&amp;T went ahead and acquired <em>The New York Times</em> as part of its public campaign to save the media, most people cheered.  Yes, there was some of the typical outcry from the usual sources, but then again, Comcast had been running NBC for years without incident.</p>
    <p>Looking back, I suppose it was really only after the Presidential election that you might say that things came to a head.  In a way, it might have been obvious that Governor Tilden, who’d pledged to aggressively enforce the antitrust laws, wasn’t going to be GT&amp;T’s favorite candidate.   That’s fine, and of course corporations have the right, just like any other person, to support or oppose a politician they don’t like.  But what only came out much later was the full extent of the company’s campaign against Tilden.  It turned out that every part of the information empire--from the news site to the media properties to the search engines, the mobile video, and the access to emails — all of it was mobilized to ensure Tilden’s defeat.  In retrospect, it was foolish for Tilden’s campaign to rely on GT&amp;T phones, Gmail and apps so heavily. Then again, doesn’t everyone?   </p>
    <p>Everyone knows the effect that the press can have on elections.  We’ve sort of come to expect that newspapers will take one side or another.   But no one quite understood or realized how important controlling the very information channels themselves would be--from mobile phones all the way through search and video. </p>
    <p>Well, Hayes is President, and nothing is going to change that.  But the whole incident has begun to make people wonder.  Should we be worried about the influence of the information channel over politics?  Are Google or AT&amp;T possibly subject to the First Amendment?   Are they common carriers, and if so, what does that mean for speech? </p>
    <p>Mention “speech” in America, and most people with legal training or an interest in the Constitution think immediately of the First Amendment and its champion, the United States Supreme Court.  The great story of free speech in America is the pamphleteer peddling an unpopular cause, defended by courts against arrest and the burning of his materials.  That is the central narrative taught in law schools, based loosely on Justice Holmes’ dissenting opinions<a href="#_ftn1" name="_ftnref1">[1]</a> and Harvard Law Professor Zechariah Chafee’s 1919 seminal paper, <i>Freedom of Speech in Wartime</i>.  Chafee wrote:</p>
    <blockquote dir="ltr">
      <p>The true meaning of freedom of speech seems to be this.  One of the most important purposes of society and government is the discovery and spread of truth on subjects of general concern.  This is possible only through absolutely unlimited discussion. . . Nevertheless, there are other purposes of government. . .  Unlimited discussion sometimes interferes with these purposes, which must then be balanced against freedom of speech…. The First Amendment gives binding force to this principle of political wisdom.<a href="#_ftn2" name="_ftnref2">[2]</a></p>
    </blockquote>
    <p>This is the first free speech tradition, the centerpiece of how free speech has been understood in America.<a href="#_ftn3" name="_ftnref3">[3]</a>  Yet while not irrelevant, it has become of secondary importance for many of the free speech questions of our times.   Instead, a second free speech tradition, dating from 1910 or the 1940s, much less well known, and barely taught in school, has slowly grown in importance.</p>
    <p>The second tradition is different.  It cares about the decisions made by concentrated, private intermediaries who control or carry speech.  It is a tradition where the main governmental agent is not the Supreme Court but the Interstate Commerce or Federal Communications Commission.  And in the second tradition the censors, as it were, are not government officials but private intermediaries, who often lack a censorial instinct of their own but are nonetheless vulnerable to censorial pressures from others.  Above all, it is a speech tradition linked to the technology of mass communications.</p>
    <p>In its heyday, from the 1930s through the 1960s, the second tradition was anchored in the common carriage rules applied to the telephone company and also, at times, to radio, and later on, in the cajoling of broadcasters and the public interest duties imposed on them.   In its mid-century incarnation, the regime was a reaction to the concentration at every layer of the communications industry.   But today, the industry is different, and in our times, the concerns have changed.  As Jeffrey Rosen wrote in 2008, in the <i>New York Times Magazine</i>:</p>
    <blockquote dir="ltr">
      <p>At the moment, the person with the most control over free expression around the globe is not a judge, a president, or a monarch. She is Nicole Wong, deputy general counsel at Google. Wong is known within Google as “The Decider,” because she alone decides which blogs, videos, articles and other content is posted on YouTube, and which are removed in response to requests from governments and users ranging from the Thai King and the Pakistani prime minister to Hollywood corporations.<a href="#_ftn4" name="_ftnref4">[4]</a></p>
    </blockquote>
    <p>Captured in this paragraph is an essential feature of the speech architecture of our times and how it affects the speech environment.   We live in an age where an enormous number of speakers, a “long tail” in popular lingo, are layered on top of a small number of very large speech intermediaries.<a href="#_ftn5" name="_ftnref5">[5]</a>  Consequently, understanding free speech in America has become a matter of understanding the behavior of intermediaries, whether motivated by their own scruples, law, or public pressure.</p>
    <p>The point of this essay is to suggest that anyone who wants to understand free speech in America in the 21st Century needs to understand the second tradition as deeply as, if not more deeply than, the first.  That means understanding that the doctrines of common carriage and network neutrality are perhaps the most important speech-related laws of our times. As we shall see, it is a messier tradition and much less familiar, but no less important. </p>
    <div>
      <br clear="all">
      <hr align="left" width="33%">
      <div id="ftn1">
        <p>
          <a href="#_ftnref1" name="_ftn1">
            [1] </a>
          E.g., <i>Abrams v. United States</i>, 250 U.S. 616 (1919) (Holmes, J., dissenting).<br>
          <a href="#_ftnref2" name="_ftn2">
            [2] </a>
          Zechariah Chafee, <i>Freedom of Speech in Wartime</i>, 32 Harv. L. Rev. 932, 956-57 (1919).<br>
          <a href="#_ftnref3" name="_ftn3">
            [3] </a>
          Scholars will know that describing Chafee’s <i>Freedom of Speech in Wartime</i> as representative of the first free speech tradition is controversial, for Chafee is considered by some to have abrogated an older First Amendment tradition and constructed his own twentieth-century “tradition.”  <i>See</i> Mark Graber, Transforming Free Speech: The Ambiguous Legacy of Civil Libertarianism (1991).   It would probably be more accurate to speak of three, four, or five major speech traditions in the United States, with a few minor ones thrown in as well. <br>
          <a href="#_ftnref4" name="_ftn4">
            [4] </a>
          
            <i>See</i> Jeffrey Rosen, <i>Google’s Gatekeepers</i>, N. Y. Times Mag., Nov. 28, 2008, <i>available at</i> http://www.nytimes.com/2008/11/30/magazine/30google-t.html.<br>
          <a href="#_ftnref5" name="_ftn5">
            [5] </a>
          “For every diverse Long Tail there's a ‘Big Dog’: a boring standardized industry that isn't sexy like Apple…but that delivers all that niche content you're hungry for.” Tim Wu, <i>The Wrong Tail</i>, Slate, July 21, 2006, http://www.slate.com/id/2146225/. 
        </p>
      </div>
    </div></p><h4>
		Downloads
	</h4><ul>
		<li><a href="http://www.brookings.edu/~/media/research/files/papers/2010/12/27-censorship-wu/1227_censorship_wu.pdf">Download the Full Paper</a></li>
	</ul><div>
		<h4>
			Authors
		</h4><ul>
			<li>Tim Wu</li>
		</ul>
	</div><div>
		Image Source: © Mike Blake / Reuters
	</div>
</div>]]>
</description><pubDate>Mon, 27 Dec 2010 16:35:00 -0500</pubDate><dc:creator>Tim Wu</dc:creator>
<itunes:summary> 
INTRODUCTION
When Google merged with telecommunications giant AT&amp;T there was, of course, some opposition.&#xA0; Some said, rather heatedly, that an information monopolist of a kind never seen before was in the works.&#xA0; But given the state of the industry after the crash, and the shocking bankruptcy of Apple, there were few who would deny that some kind of merger was necessary. &#xA0;Necessary, that is,&#xA0;not just to save jobs but to save the communications infrastructure that millions of Americans had come to depend on. After it went through, contrary to some of the dire warnings that came out, everything was much the same.&#xA0;&#xA0; Google was still Google, the telephone company was still AT&amp;T, and after a while, much of the hubbub died down. 
It was a few years later that the rumors began, mostly leaks from former employees, suggesting that GT&amp;T (now AT&amp;T again) was up to something.&#xA0; Some said the firm was fixing its search results and taking other steps to ensure that Google itself would never be displaced from its throne.&#xA0;&#xA0; Of course, while it made for some good headlines, no one paid too much attention.&#xA0; The fact is that there are always conspiracy theorists and disgruntled employees out there, no matter what the company. When GT&amp;T went ahead and acquired The New York Times as part of its public campaign to save the media, most people cheered.&#xA0; Yes, there was some of the typical outcry from the usual sources, but then again, Comcast had been running NBC for years without incident. 
Looking back, I suppose it was really only after the Presidential election that you might say that things came to a head.&#xA0; In a way, it might have been obvious that Governor Tilden, who&#x2019;d pledged to aggressively enforce the antitrust laws, wasn&#x2019;t going to be GT&amp;T&#x2019;s favorite candidate.&#xA0;&#xA0; That&#x2019;s fine, and of course corporations have the right, just like any other person, to support or oppose a politician they don&#x2019;t like.&#xA0; But what only came out much later was the full extent of the company&#x2019;s campaign against Tilden.&#xA0; It turned out that every part of the information empire--from the news site to the media properties to the search engines, the mobile video, and the access to emails &#x2014; all of it was mobilized to ensure Tilden&#x2019;s defeat.&#xA0; In retrospect, it was foolish for Tilden&#x2019;s campaign to rely on GT&amp;T phones, Gmail and apps so heavily. Then again, doesn&#x2019;t everyone?&#xA0;&#xA0; 
Everyone knows the effect that the press can have on elections.&#xA0; We&#x2019;ve sort of come to expect that newspapers will take one side or another.&#xA0; &#xA0;But no one quite understood or realized how important controlling the very information channels themselves would be--from mobile phones all the way through search and video. 
Well, Hayes is President, and nothing is going to change that.&#xA0; But the whole incident has begun to make people wonder.&#xA0; Should we be worried about the influence of the information channel over politics?&#xA0; Are Google or AT&amp;T possibly subject to the First Amendment?&#xA0;&#xA0; Are they common carriers, and if so, what does that mean for speech? 
Mention &#8220;speech&#8221; in America, and most people with legal training or an interest in the Constitution think immediately of the First Amendment and its champion, the United States Supreme Court.&#xA0; The great story of free speech in America is the pamphleteer peddling an unpopular cause, defended by courts against arrest and the burning of his materials.&#xA0; That is the central narrative taught in law schools, based loosely on Justice Holmes&#x2019; dissenting opinions[1] and Harvard Law Professor Zechariah Chafee&#x2019;s 1919 seminal paper, Freedom of Speech in Wartime.&#xA0; Chafee wrote: 
The true meaning of freedom of speech seems to be this.&#xA0; One of the most important purposes of society and ... </itunes:summary>
<itunes:subtitle>INTRODUCTION
When Google merged with telecommunications giant AT&amp;T there was, of course, some opposition.&#xA0; Some said, rather heatedly, that an information monopolist of a kind never seen before was in the works.</itunes:subtitle><content:encoded><![CDATA[<div>
	<img src="http://www.brookings.edu/~/media/research/images/g/gk%20go/google_search001_16x9.jpg?w=120" alt="" border="0" />
<br><p><b>INTRODUCTION</b></p><p><p>When Google merged with telecommunications giant AT&amp;T there was, of course, some opposition.  Some said, rather heatedly, that an information monopolist of a kind never seen before was in the works.  But given the state of the industry after the crash, and the shocking bankruptcy of Apple, there were few who would deny that some kind of merger was necessary.  Necessary, that is, not just to save jobs but to save the communications infrastructure that millions of Americans had come to depend on. After it went through, contrary to some of the dire warnings that came out, everything was much the same.   Google was still Google, the telephone company was still AT&amp;T, and after a while, much of the hubbub died down.</p>
    <p>It was a few years later that the rumors began, mostly leaks from former employees, suggesting that GT&amp;T (now AT&amp;T again) was up to something.  Some said the firm was fixing its search results and taking other steps to ensure that Google itself would never be displaced from its throne.   Of course, while it made for some good headlines, no one paid too much attention.  The fact is that there are always conspiracy theorists and disgruntled employees out there, no matter what the company. When GT&amp;T went ahead and acquired <em>The New York Times</em> as part of its public campaign to save the media, most people cheered.  Yes, there was some of the typical outcry from the usual sources, but then again, Comcast had been running NBC for years without incident.</p>
    <p>Looking back, I suppose it was really only after the Presidential election that you might say that things came to a head.  In a way, it might have been obvious that Governor Tilden, who’d pledged to aggressively enforce the antitrust laws, wasn’t going to be GT&amp;T’s favorite candidate.   That’s fine, and of course corporations have the right, just like any other person, to support or oppose a politician they don’t like.  But what only came out much later was the full extent of the company’s campaign against Tilden.  It turned out that every part of the information empire--from the news site to the media properties to the search engines, the mobile video, and the access to emails — all of it was mobilized to ensure Tilden’s defeat.  In retrospect, it was foolish for Tilden’s campaign to rely on GT&amp;T phones, Gmail and apps so heavily. Then again, doesn’t everyone?   </p>
    <p>Everyone knows the effect that the press can have on elections.  We’ve sort of come to expect that newspapers will take one side or another.   But no one quite understood or realized how important controlling the very information channels themselves would be--from mobile phones all the way through search and video. </p>
    <p>Well, Hayes is President, and nothing is going to change that.  But the whole incident has begun to make people wonder.  Should we be worried about the influence of the information channel over politics?  Are Google or AT&amp;T possibly subject to the First Amendment?   Are they common carriers, and if so, what does that mean for speech? </p>
    <p>Mention “speech” in America, and most people with legal training or an interest in the Constitution think immediately of the First Amendment and its champion, the United States Supreme Court.  The great story of free speech in America is the pamphleteer peddling an unpopular cause, defended by courts against arrest and the burning of his materials.  That is the central narrative taught in law schools, based loosely on Justice Holmes’ dissenting opinions<a href="#_ftn1" name="_ftnref1">[1]</a> and Harvard Law Professor Zechariah Chafee’s 1919 seminal paper, <i>Freedom of Speech in Wartime</i>.  Chafee wrote:</p>
    <blockquote dir="ltr">
      <p>The true meaning of freedom of speech seems to be this.  One of the most important purposes of society and government is the discovery and spread of truth on subjects of general concern.  This is possible only through absolutely unlimited discussion. . . Nevertheless, there are other purposes of government. . .  Unlimited discussion sometimes interferes with these purposes, which must then be balanced against freedom of speech…. The First Amendment gives binding force to this principle of political wisdom.<a href="#_ftn2" name="_ftnref2">[2]</a></p>
    </blockquote>
    <p>This is the first free speech tradition, the centerpiece of how free speech has been understood in America.<a href="#_ftn3" name="_ftnref3">[3]</a>  Yet while not irrelevant, it has become of secondary importance for many of the free speech questions of our times.   Instead, a second free speech tradition, dating from 1910 or the 1940s, much less well known, and barely taught in school, has slowly grown in importance.</p>
    <p>The second tradition is different.  It cares about the decisions made by concentrated, private intermediaries who control or carry speech.  It is a tradition where the main governmental agent is not the Supreme Court but the Interstate Commerce or Federal Communications Commission.  And in the second tradition the censors, as it were, are not government officials but private intermediaries, who often lack a censorial instinct of their own but are nonetheless vulnerable to censorial pressures from others.  Above all, it is a speech tradition linked to the technology of mass communications.</p>
    <p>In its heyday, from the 1930s through the 1960s, the second tradition was anchored in the common carriage rules applied to the telephone company and also, at times, to radio, and later on, in the cajoling of broadcasters and the public interest duties imposed on them.   In its mid-century incarnation, the regime was a reaction to the concentration at every layer of the communications industry.   But today, the industry is different, and in our times, the concerns have changed.  As Jeffrey Rosen wrote in 2008, in the <i>New York Times Magazine</i>:</p>
    <blockquote dir="ltr">
      <p>At the moment, the person with the most control over free expression around the globe is not a judge, a president, or a monarch. She is Nicole Wong, deputy general counsel at Google. Wong is known within Google as “The Decider,” because she alone decides which blogs, videos, articles and other content is posted on YouTube, and which are removed in response to requests from governments and users ranging from the Thai King and the Pakistani prime minister to Hollywood corporations.<a href="#_ftn4" name="_ftnref4">[4]</a></p>
    </blockquote>
    <p>Captured in this paragraph is an essential feature of the speech architecture of our times and how it affects the speech environment.   We live in an age where an enormous number of speakers, a “long tail” in popular lingo, are layered on top of a small number of very large speech intermediaries.<a href="#_ftn5" name="_ftnref5">[5]</a>  Consequently, understanding free speech in America has become a matter of understanding the behavior of intermediaries, whether motivated by their own scruples, law, or public pressure.</p>
    <p>The point of this essay is to suggest that anyone who wants to understand free speech in America in the 21st Century needs to understand the second tradition as deeply as, if not more deeply than, the first.  That means understanding that the doctrines of common carriage and network neutrality are perhaps the most important speech-related laws of our times. As we shall see, it is a messier tradition and much less familiar, but no less important. </p>
    <div>
      
<br clear="all">
      <hr align="left" width="33%">
      <div id="ftn1">
        <p>
          <a href="#_ftnref1" name="_ftn1">
            [1] </a>
          E.g., <i>Abrams v. United States</i>, 250 U.S. 616 (1919) (Holmes, J., dissenting).
<br>
          <a href="#_ftnref2" name="_ftn2">
            [2] </a>
          Zechariah Chafee, <i>Freedom of Speech in Wartime</i>, 32 Harv. L. Rev. 932, 956-57 (1919).
<br>
          <a href="#_ftnref3" name="_ftn3">
            [3] </a>
          Scholars will know that describing Chafee’s <i>Freedom of Speech in Wartime</i> as representative of the first free speech tradition is controversial, for Chafee is considered by some to have abrogated an older First Amendment tradition and constructed his own twentieth-century “tradition.”  <i>See</i> Mark Graber, Transforming Free Speech: The Ambiguous Legacy of Civil Libertarianism (1991).   It would probably be more accurate to speak of three, four, or five major speech traditions in the United States, with a few minor ones thrown in as well. 
<br>
          <a href="#_ftnref4" name="_ftn4">
            [4] </a>
          
            <i>See</i> Jeffrey Rosen, <i>Google’s Gatekeepers</i>, N. Y. Times Mag., Nov. 28, 2008, <i>available at</i> http://www.nytimes.com/2008/11/30/magazine/30google-t.html.
<br>
          <a href="#_ftnref5" name="_ftn5">
            [5] </a>
          “For every diverse Long Tail there's a ‘Big Dog’: a boring standardized industry that isn't sexy like Apple…but that delivers all that niche content you're hungry for.” Tim Wu, <i>The Wrong Tail</i>, Slate, July 21, 2006, http://www.slate.com/id/2146225/. 
        </p>
      </div>
    </div></p><h4>
		Downloads
	</h4><ul>
		<li><a href="http://webfeeds.brookings.edu/~/t/0/0/brookingsrss/series/futureoftheconstitution/~www.brookings.edu/~/media/research/files/papers/2010/12/27-censorship-wu/1227_censorship_wu.pdf">Download the Full Paper</a></li>
	</ul><div>
		<h4>
			Authors
		</h4><ul>
			<li>Tim Wu</li>
		</ul>
	</div><div>
		Image Source: © Mike Blake / Reuters
	</div>
</div>]]>
</content:encoded></item>
<item>
<feedburner:origLink>http://www.brookings.edu/research/papers/2010/12/17-constitutional-interpretation-lessig?rssid=Future+of+the+Constitution</feedburner:origLink><guid isPermaLink="false">{CD500539-3D5B-489C-B56B-A10DFE01712F}</guid><link>http://webfeeds.brookings.edu/~/65487896/0/brookingsrss/series/futureoftheconstitution~Translating-and-Transforming-the-Future</link><title>Translating and Transforming the Future</title><description><![CDATA[<div>
	<img src="http://www.brookings.edu/~/media/research/images/s/su%20sz/supreme_court008_16x9.jpg?w=120" alt="" border="0" /><br /><p><b>Introduction</b></p><p><p class="bodytextfirstpar">There’s a way that we academics talk about constitutional interpretation that suggests it to be more than it turns out to be. We speak of it as if the Court decides cases through elaborate (sometimes more, sometimes less) chains of reasoning. As if it were a Socratic dialog, with the author inviting the reader to the seven steps necessary to see why the conclusion follows.</p>
    <p>But constitutional interpretation is much more pedestrian and much more contingent. Whether the justices are reaching for particular results or not, opinions  rarely move far beyond what the context of the decision offers up. There’s a set of views taken-for-granted, at least by the majority, in a particular context; the opinion leverages those views to move the law one or two steps from where it starts. These taken-for-granted views include of course views about other parts of the law. But importantly for the purposes of this book, they include views of much more than the law. In particular, they include views about what’s technologically feasible, or morally acceptable, or culturally done.</p>
    <p>Think of constitutional interpretation as a game of Frogger—the old video game in which the player has to jump a frog across the road and avoid getting run over by passing cars. In particular, think of the level where the frog also has to cross a river by stepping onto logs as they pass by. The frog can’t simply pick up and move to the other side of the river. Instead, the frog moves one step at a time, as the opportunity for a move presents itself. The player doesn’t create the opportunity for a move. He simply finds himself with it, and he takes it, and waits for the next.   </p>
    <p>In this picture of constitutional interpretation, the critical bits are these opportunities for a move, a single move, provided by an interpretive context that the interpreter only slightly, if at all, can affect. (Of course in Frogger, he can’t affect them at all.) These moves get presented to the interpreter; they get constituted by the parts of an interpretive context that at least five justices treat as taken-for-granted, as obvious, as the stuff no one, or at least no one like them, needs to argue about. And it is in light of changes in this class of taken-for-granteds that change in constitutional law can happen. </p>
    <p>This dynamic helps show why predicting the future in constitutional law is so difficult. The challenge is not that we can’t describe all the elements the future will or could have. The difficulty is that we can’t know which elements will be obvious. For the critical, yet wholly under-theorized, bit to constitutional interpretation is not what the interpreters might argue about. It is the things that they take for granted. Constitutional meaning comes just as much from what everyone knows is true (both then and now) as from what the Framers actually wrote. Yet “what everyone knows is true” changes over time, and in ways that it is impossible to predict, even if quite possible to affect. </p>
    <p>Take an obvious example: The Constitution says: “The executive Power shall be vested in a President of the United States of America. He shall hold his Office during the Term of four Years.”</p>
    <p>It is unquestioned that “he” in this clause does not just mean “he” — unquestioned, at least, for us. For us, “he” means “he” or “she.” For the Framers, it would have been unquestioned that “he” just means “he.” It would have been unthinkable that Dolley Madison could have been President of the United States, or any other woman for that matter. Part of that unthinkability was tied to specific legal disabilities. But much more important was a broad and general understanding within the framing context — stuff that they took for granted, and the opposite of the stuff that we take for granted. And not just in the framing context. Opponents of the 14th Amendment argued that by its terms the amendment would radically remake the rights of women. Supporters of the 14th Amendment called the claim absurd. And maybe it was, until the Supreme Court actually did apply the Amendment to claims made by women, again because it was unthinkable that it would not.</p>
    <p>The practice of constitutional interpretation, or at least, any practice aiming at fidelity, must include an understanding of the sort of issues, or matters, that the authors took-for-granted. These elements must be understood because they mark the things the authors didn’t think it necessary to express: these were the things that everyone knows to be true — for example, the place of women in society, the salience of “certain unalienable rights,” the role of the law of nations, and so forth. To read what they wrote, and understand its meaning, thus requires understanding what they didn’t write, and how that also helps constitute their meaning.</p>
    <p>We know how to identify these taken-for-granteds about the past, if imperfectly and incompletely. History teaches some methods. They include accounts of the interpretive contexts, descriptions of the sort of issues that no one debated, and actions that reveal at least what no one was embarrassed to reveal. If someone had said to Hamilton, “Why aren’t there any women in Washington’s Cabinet?” he wouldn’t have been embarrassed by the question. He wouldn’t have understood it. That marks the disability attached to women as a fact of a certain kind. It went unmentioned, since it was not necessary to mention, since no one (among the authors at least) would have thought to dispute it.</p>
    <p>But we don’t know how to identify these taken-for-granteds with the future. We can talk about what sort of things will be obvious in 2030. I’m confident the equal status of women is not about to be drawn into doubt. And I’m also confident that the right of people to worship whatever god, or no god at all, will also remain as bedrock within our tradition. But a whole host of other issues and questions and beliefs will also be taken-for-granted then. And it would take a novelist with the skill of Tolstoy or Borges to fill out the details necessary for us to even glimpse that universe of uncontested truth, let alone to convince us of it.</p>
    <p>Even then, it wouldn’t feel uncontested to us. If a complete description of the world in 2030 would include the fact that most everyone accepted cloning as a necessary means to health (as many science fiction stories depict, for organ banking, for example), we would still experience that “fact” as something to be challenged, or  at least, questioned. I’m not even sure how to describe the mental state we would have to be able to adopt to be able to relate to the uncontested of the future the way the uncontested of the future would be experienced. It would be a possibility, or a scenario. But it wouldn’t have the force necessary to bend, or alter, the law the way it will, when it is in fact taken-for-granted by those who read.</p>
    <p>Until we could come to reckon these different taken-for-granteds, I want to argue, we can’t predict how constitutional interpretation in the future will proceed. It will follow the logs offered to the frog, but we can’t know which logs will present themselves when.  </p>
    <p>Take as an example the recent decision by the Supreme Court in Citizens United v. FEC,<a href="#_ftn1" name="_ftnref1"><sup><sup>[1]</sup></sup></a> upholding a constitutional right for corporations to spend an unlimited amount in independent campaign expenditures. While most criticize that decision for treating corporations as persons, in fact, the Court never invokes that long standing doctrine to support its judgment. Instead, the holding hangs upon a limit in government power, not the vitality of the personhood of corporations. </p>
    <p>But there is something about the status of corporations in today’s society that is essential to understanding how the Court decided as it did. If one imagined asking the Framers about the “unalienable rights,” as the Declaration of Independence puts it, that the Constitution intended to secure to corporations, it is perfectly clear they would have been puzzled by the question. Rights were the sort of things that “men” are “endowed” with, not legal entities. And while legal entities may well enjoy rights derivatively, as proxies for real human beings, that’s only when the thing they’re defending is something that, if taken away, a real human being would also necessarily lose. So a corporation should have the right to defend against the taking of its property, because the taking of its property necessarily involves the taking of the property of a real human being. Beyond that derivative, however, it would have been hard for them to understand the sense of this state granted privilege (which of course a limited liability corporation is) also enjoying “rights.” And impossible, I want to argue, for them to understand how this idea would lead to the morphing of the First Amendment to embrace a political speech right for this legal entity. </p>
    <p>For us, today, the idea of a corporation’s possessing these rights is an easier idea to comprehend. Corporations are common, and democratically created (in the sense that anyone can create them). And though they are radically different in wealth and power, we all see them as essential to important aspects of our life. They are familiar, pedestrian. It doesn’t seem weird to imagine them as constitutionally protected, even beyond the derivative protection for things like property.</p>
    <p>The familiarity of corporations, their ubiquity, and their importance all helped cover up a logical gap in the Supreme Court’s reasoning in Citizens United. In addressing the obvious (and in my view, conclusive) argument that these state created entities couldn’t possess any powers the state didn’t grant them, Justice Kennedy, quoting Justice Scalia, wrote “[i]t is rudimentary that the State cannot exact as the price of those special advantages the forfeiture of First Amendment rights.”<a href="#_ftn2" name="_ftnref2"><sup><sup>[2]</sup></sup></a></p>
    <p>But obviously, there were no “First Amendment rights” of humans that would be forfeited by saying that a legal entity created by the state doesn’t include among its powers the right to engage in political speech. To say something is “forfeited” is to say it existed and then was removed. But no rights of any humans are forfeited by a law that restricts a corporation. Humans would have all the rights they had to speak after such a law as before it. The only loser is the corporation. Yet so obviously familiar and native have corporations become, that Citizens United becomes a Blade Runner-like moment in Supreme Court history, where a human-created entity gets endowed with “unalienable rights.” </p>
    <p>I don’t mean (obviously) that everyone agrees with the conclusion or the protection recognized. Indeed, the decision has sparked an anti-corporate rage that may in the end defeat its premise. Instead, my point is that it wasn’t weird to recognize the rights the Court recognized, just as it wasn’t weird for the Plessy Court to treat segregation as “reasonable,”<a href="#_ftn3" name="_ftnref3"><sup><sup>[3]</sup></sup></a> or weird for Justice Bradley to write in <i>Bradwell v. The State</i>:</p>
    <p>[T]he civil law, as well as nature herself, has always recognized a wide difference in the respective spheres and destinies of man and woman. Man is, or should be, woman's protector and defender. The natural and proper timidity and delicacy which belongs to the female sex evidently unfits it for many of the occupations of civil life.<a href="#_ftn4" name="_ftnref4"><sup><sup>[4]</sup></sup></a></p>
    <p>To the contrary, these claims are only weird in light of a radically different baseline of taken-for-granteds. And while it is relatively easy in hindsight to see these differences, and remark on them, it is incredibly difficult to see them in the future, and believe them. Again, the Framers could not have predicted what the Supreme Court did, even if we had told them that corporations would be as common as clay. </p>
    <p>Consider one more try to make the very same point: Everyone (almost) recognizes in their parents views that are dated, or weird. Those might be views about race, or sexual orientation, or music. Whatever they are, they mark the distance between our parents and us. We can’t imagine ourselves holding such views, or viewing the world in light of them. </p>
    <p>But what are the views that we hold that our kids will react to similarly? What is the equivalent of racism, or homophobia, for them? And even if you could identify what those views are — maybe the idea that some of us still eat meat, or that we permit an industry to slaughter dolphins so that we can eat maguro — it is almost impossible for us to gin up the outrage or disgust about ourselves that they will certainly feel about us. Of course, they will love us, as we love our parents. But they will be distant from us, as we are from our parents, for reasons we couldn’t begin to feel as we feel the reasons that distance us from the generation before. </p>
    <p>Put most directly: The past is interpretively more accessible than the future. We can imagine it more fully, and feel the differences more completely. And that asymmetry affects fundamentally the ability to write an essay about what the Constitution in the future will hold. </p>
    <div>
      <br clear="all">
      <hr align="left" width="33%">
      <div id="ftn1">
        <p>
          <a href="#_ftnref1" name="_ftn1">
            <sup>
              <sup>
                [1]
              </sup>
            </sup>
          </a>
           558 U.S. 50 (2010).<br>
          <a href="#_ftnref2" name="_ftn2">
            <sup>
              <sup>
                [2]
              </sup>
            </sup>
          </a>
           Citizens United, slip op. at 35. <br>
          <a href="#_ftnref3" name="_ftn3">
            <sup>
              <sup>
                [3]
              </sup>
            </sup>
          </a>
           Plessy v. Ferguson, 163 U.S. 537 (1896).<br>
          <a href="#_ftnref4" name="_ftn4">
            <sup>
              <sup>
                [4]
              </sup>
            </sup>
          </a>
           Bradwell v. The State, 83 U.S. 130, 141 (1872).
        </p>
      </div>
    </div></p><h4>
		Downloads
	</h4><ul>
		<li><a href="http://www.brookings.edu/~/media/research/files/papers/2010/12/17-constitutional-interpretation-lessig/1217_constitutional_interpretation_lessig.pdf">Download the Full Paper</a></li>
	</ul><div>
		<h4>
			Authors
		</h4><ul>
			<li>Lawrence Lessig</li>
		</ul>
	</div><div>
		Image Source: © Larry Downing / Reuters
	</div>
</div>]]>
</description><pubDate>Fri, 17 Dec 2010 08:56:00 -0500</pubDate><dc:creator>Lawrence Lessig</dc:creator>
<itunes:summary> 
Introduction
There&#x2019;s a way that we academics talk about constitutional interpretation that suggests it to be more than it turns out to be. We speak of it as if the Court decides cases through elaborate (sometimes more, sometimes less) chains of reasoning. As if it were a Socratic dialog, with the author inviting the reader to the seven steps necessary to see why the conclusion follows. 
But constitutional interpretation is much more pedestrian and much more contingent. Whether the justices are reaching for particular results or not, opinions &#xA0;rarely move far beyond what the context of the decision offers up. There&#x2019;s a set of views taken-for-granted, at least by the majority, in a particular context; the opinion leverages those views to move the law one or two steps from where it starts. These taken-for-granted views include of course views about other parts of the law. But importantly for the purposes of this book, they include views of much more than the law. In particular, they include views about what&#x2019;s technologically feasible, or morally acceptable, or culturally done. 
Think of constitutional interpretation as a game of Frogger&#x2014;the old video game in which the player has to jump a frog across the road and avoid getting run over by passing cars. In particular, think of the level where the frog also has to cross a river by stepping onto logs as they pass by. The frog can&#x2019;t simply pick up and move to the other side of the river. Instead, the frog moves one step at a time, as the opportunity for a move presents itself. The player doesn&#x2019;t create the opportunity for a move. He simply finds himself with it, and he takes it, and waits for the next.&#xA0;&#xA0; 
In this picture of constitutional interpretation, the critical bits are these opportunities for a move, a single move, provided by an interpretive context that the interpreter only slightly, if at all, can affect. (Of course in Frogger, he can&#x2019;t affect them at all.) These moves get presented to the interpreter; they get constituted by the parts of an interpretive context that at least five justices treat as taken-for-granted, as obvious, as the stuff no one, or at least no one like them, needs to argue about. And it is in light of changes in this class of taken-for-granteds that change in constitutional law can happen. 
This dynamic helps show why predicting the future in constitutional law is so difficult. The challenge is not that we can&#x2019;t describe all the elements the future will or could have. The difficulty is that we can&#x2019;t know which elements will be obvious. For the critical, yet wholly under-theorized, bit to constitutional interpretation is not what the interpreters might argue about. It is the things that they take for granted. Constitutional meaning comes just as much from what everyone knows is true (both then and now) as from what the Framers actually wrote. Yet &#8220;what everyone knows is true&#8221; changes over time, and in ways that it is impossible to predict, even if quite possible to affect. 
Take an obvious example: The Constitution says: &#8220;The executive Power shall be vested in a President of the United States of America. He shall hold his Office during the Term of four Years.&#8221; 
It is unquestioned that &#8220;he&#8221; in this clause does not just mean &#8220;he&#8221; &#x2014; unquestioned, at least, for us. For us, &#8220;he&#8221; means &#8220;he&#8221; or &#8220;she.&#8221; For the Framers, it would have been unquestioned that &#8220;he&#8221; just means &#8220;he.&#8221; It would have been unthinkable that Dolley Madison could have been President of the United States, or any other woman for that matter. Part of that unthinkability was tied to specific legal disabilities. But much more important was a broad and general understanding within the framing context &#x2014; stuff that they took for granted, and the opposite of the stuff that we take for ... </itunes:summary>
<itunes:subtitle>Introduction
There&#x2019;s a way that we academics talk about constitutional interpretation that suggests it to be more than it turns out to be. We speak of it as if the Court decides cases through elaborate (sometimes more, sometimes less)</itunes:subtitle><content:encoded><![CDATA[<div>
	<img src="http://www.brookings.edu/~/media/research/images/s/su%20sz/supreme_court008_16x9.jpg?w=120" alt="" border="0" />
<br><p><b>Introduction</b></p><p><p class="bodytextfirstpar">There’s a way that we academics talk about constitutional interpretation that suggests it to be more than it turns out to be. We speak of it as if the Court decides cases through elaborate (sometimes more, sometimes less) chains of reasoning. As if it were a Socratic dialog, with the author inviting the reader to the seven steps necessary to see why the conclusion follows.</p>
    <p>But constitutional interpretation is much more pedestrian and much more contingent. Whether the justices are reaching for particular results or not, opinions  rarely move far beyond what the context of the decision offers up. There’s a set of views taken-for-granted, at least by the majority, in a particular context; the opinion leverages those views to move the law one or two steps from where it starts. These taken-for-granted views include of course views about other parts of the law. But importantly for the purposes of this book, they include views of much more than the law. In particular, they include views about what’s technologically feasible, or morally acceptable, or culturally done.</p>
    <p>Think of constitutional interpretation as a game of Frogger—the old video game in which the player has to jump a frog across the road and avoid getting run over by passing cars. In particular, think of the level where the frog also has to cross a river by stepping onto logs as they pass by. The frog can’t simply pick up and move to the other side of the river. Instead, the frog moves one step at a time, as the opportunity for a move presents itself. The player doesn’t create the opportunity for a move. He simply finds himself with it, and he takes it, and waits for the next.   </p>
    <p>In this picture of constitutional interpretation, the critical bits are these opportunities for a move, a single move, provided by an interpretive context that the interpreter only slightly, if at all, can affect. (Of course in Frogger, he can’t affect them at all.) These moves get presented to the interpreter; they get constituted by the parts of an interpretive context that at least five justices treat as taken-for-granted, as obvious, as the stuff no one, or at least no one like them, needs to argue about. And it is in light of changes in this class of taken-for-granteds that change in constitutional law can happen. </p>
    <p>This dynamic helps show why predicting the future in constitutional law is so difficult. The challenge is not that we can’t describe all the elements the future will or could have. The difficulty is that we can’t know which elements will be obvious. For the critical, yet wholly under-theorized, bit to constitutional interpretation is not what the interpreters might argue about. It is the things that they take for granted. Constitutional meaning comes just as much from what everyone knows is true (both then and now) as from what the Framers actually wrote. Yet “what everyone knows is true” changes over time, and in ways that it is impossible to predict, even if quite possible to affect. </p>
    <p>Take an obvious example: The Constitution says: “The executive Power shall be vested in a President of the United States of America. He shall hold his Office during the Term of four Years.”</p>
    <p>It is unquestioned that “he” in this clause does not just mean “he” — unquestioned, at least, for us. For us, “he” means “he” or “she.” For the Framers, it would have been unquestioned that “he” just means “he.” It would have been unthinkable that Dolley Madison could have been President of the United States, or any other woman for that matter. Part of that unthinkability was tied to specific legal disabilities. But much more important was a broad and general understanding within the framing context — stuff that they took for granted, and the opposite of the stuff that we take for granted. And not just in the framing context. Opponents of the 14th Amendment argued that by its terms the amendment would radically remake the rights of women. Supporters of the 14th Amendment called the claim absurd. And maybe it was, until the Supreme Court actually did apply the Amendment to claims made by women, again because it was unthinkable that it would not.</p>
    <p>The practice of constitutional interpretation, or at least, any practice aiming at fidelity, must include an understanding of the sort of issues, or matters, that the authors took-for-granted. These elements must be understood because they mark the things the authors didn’t think it necessary to express: these were the things that everyone knows to be true — for example, the place of women in society, the salience of “certain unalienable rights,” the role of the law of nations, and so forth. To read what they wrote, and understand its meaning, thus requires understanding what they didn’t write, and how that also helps constitute their meaning.</p>
    <p>We know how to identify these taken-for-granteds about the past, if imperfectly and incompletely. History teaches some methods. They include accounts of the interpretive contexts, descriptions of the sort of issues that no one debated, and actions that reveal at least what no one was embarrassed to reveal. If someone had said to Hamilton, “Why aren’t there any women in Washington’s Cabinet?” he wouldn’t have been embarrassed by the question. He wouldn’t have understood it. That marks the disability attached to women as a fact of a certain kind. It went unmentioned, since it was not necessary to mention, since no one (among the authors at least) would have thought to dispute it.</p>
    <p>But we don’t know how to identify these taken-for-granteds with the future. We can talk about what sort of things will be obvious in 2030. I’m confident the equal status of women is not about to be drawn into doubt. And I’m also confident that the right of people to worship whatever god, or no god at all, will also remain as bedrock within our tradition. But a whole host of other issues and questions and beliefs will also be taken-for-granted then. And it would take a novelist with the skill of Tolstoy or Borges to fill out the details necessary for us to even glimpse that universe of uncontested truth, let alone to convince us of it.</p>
    <p>Even then, it wouldn’t feel uncontested to us. If a complete description of the world in 2030 would include the fact that most everyone accepted cloning as a necessary means to health (as many science fiction stories depict, for organ banking, for example), we would still experience that “fact” as something to be challenged, or  at least, questioned. I’m not even sure how to describe the mental state we would have to be able to adopt to be able to relate to the uncontested of the future the way the uncontested of the future would be experienced. It would be a possibility, or a scenario. But it wouldn’t have the force necessary to bend, or alter, the law the way it will, when it is in fact taken-for-granted by those who read.</p>
    <p>Until we could come to reckon these different taken-for-granteds, I want to argue, we can’t predict how constitutional interpretation in the future will proceed. It will follow the logs offered to the frog, but we can’t know which logs will present themselves when.  </p>
    <p>Take as an example the recent decision by the Supreme Court in Citizens United v. FEC,<a href="#_ftn1" name="_ftnref1"><sup><sup>[1]</sup></sup></a> upholding a constitutional right for corporations to spend an unlimited amount in independent campaign expenditures. While most criticize that decision for treating corporations as persons, in fact, the Court never invokes that long standing doctrine to support its judgment. Instead, the holding hangs upon a limit in government power, not the vitality of the personhood of corporations. </p>
    <p>But there is something about the status of corporations in today’s society that is essential to understanding how the Court decided as it did. If one imagined asking the Framers about the “unalienable rights,” as the Declaration of Independence puts it, that the Constitution intended to secure to corporations, it is perfectly clear they would have been puzzled by the question. Rights were the sort of things that “men” are “endowed” with, not legal entities. And while legal entities may well enjoy rights derivatively, as proxies for real human beings, that’s only when the thing they’re defending is something that, if taken away, a real human being would also necessarily lose. So a corporation should have the right to defend against the taking of its property, because the taking of its property necessarily involves the taking of the property of a real human being. Beyond that derivative, however, it would have been hard for them to understand the sense of this state granted privilege (which of course a limited liability corporation is) also enjoying “rights.” And impossible, I want to argue, for them to understand how this idea would lead to the morphing of the First Amendment to embrace a political speech right for this legal entity. </p>
    <p>For us, today, the idea of a corporation’s possessing these rights is an easier idea to comprehend. Corporations are common, and democratically created (in the sense that anyone can create them). And though they are radically different in wealth and power, we all see them as essential to important aspects of our life. They are familiar, pedestrian. It doesn’t seem weird to imagine them as constitutionally protected, even beyond the derivative protection for things like property.</p>
    <p>The familiarity of corporations, their ubiquity, and their importance all helped cover up a logical gap in the Supreme Court’s reasoning in Citizens United. In addressing the obvious (and in my view, conclusive) argument that these state created entities couldn’t possess any powers the state didn’t grant them, Justice Kennedy, quoting Justice Scalia, wrote “[i]t is rudimentary that the State cannot exact as the price of those special advantages the forfeiture of First Amendment rights.”<a href="#_ftn2" name="_ftnref2"><sup><sup>[2]</sup></sup></a></p>
    <p>But obviously, there were no “First Amendment rights” of humans that would be forfeited by saying that a legal entity created by the state doesn’t include among its powers the right to engage in political speech. To say something is “forfeited” is to say it existed and then was removed. But no rights of any humans are forfeited by a law that restricts a corporation. Humans would have all the rights they had to speak after such a law as before it. The only loser is the corporation. Yet so obviously familiar and native have corporations become, that Citizens United becomes a Blade Runner-like moment in Supreme Court history, where a human-created entity gets endowed with “unalienable rights.” </p>
    <p>I don’t mean (obviously) that everyone agrees with the conclusion or the protection recognized. Indeed, the decision has sparked an anti-corporate rage that may in the end defeat its premise. Instead, my point is that it wasn’t weird to recognize the rights the Court recognized, just as it wasn’t weird for the Plessy Court to treat segregation as “reasonable,”<a href="#_ftn3" name="_ftnref3"><sup><sup>[3]</sup></sup></a> or weird for Justice Bradley to write in <i>Bradwell v. The State</i>:</p>
    <p>[T]he civil law, as well as nature herself, has always recognized a wide difference in the respective spheres and destinies of man and woman. Man is, or should be, woman's protector and defender. The natural and proper timidity and delicacy which belongs to the female sex evidently unfits it for many of the occupations of civil life.<a href="#_ftn4" name="_ftnref4"><sup><sup>[4]</sup></sup></a></p>
    <p>To the contrary, these claims are only weird in light of a radically different baseline of taken-for-granteds. And while it is relatively easy in hindsight to see these differences, and remark on them, it is incredibly difficult to see them in the future, and believe them. Again, the Framers could not have predicted what the Supreme Court did, even if we had told them that corporations would be as common as clay. </p>
    <p>Consider one more try to make the very same point: Everyone (almost) recognizes in their parents views that are dated, or weird. Those might be views about race, or sexual orientation, or music. Whatever they are, they mark the distance between our parents and us. We can’t imagine ourselves holding such views, or viewing the world in light of them. </p>
    <p>But what are the views that we hold that our kids will react to similarly? What is the equivalent of racism, or homophobia, for them? And even if you could identify what those views are — maybe the idea that some of us still eat meat, or that we permit an industry to slaughter dolphins so that we can eat maguro — it is almost impossible for us to gin up the outrage or disgust about ourselves that they will certainly feel about us. Of course, they will love us, as we love our parents. But they will be distant from us, as we are from our parents, for reasons we couldn’t begin to feel as we feel the reasons that distance us from the generation before. </p>
    <p>Put most directly: The past is interpretively more accessible than the future. We can imagine it more fully, and feel the differences more completely. And that asymmetry affects fundamentally the ability to write an essay about what the Constitution in the future will hold. </p>
    <div>
      
<br clear="all">
      <hr align="left" width="33%">
      <div id="ftn1">
        <p>
          <a href="#_ftnref1" name="_ftn1">
            <sup>
              <sup>
                [1]
              </sup>
            </sup>
          </a>
           558 U.S. 50 (2010).
<br>
          <a href="#_ftnref2" name="_ftn2">
            <sup>
              <sup>
                [2]
              </sup>
            </sup>
          </a>
           Citizens United, slip op. at 35. 
<br>
          <a href="#_ftnref3" name="_ftn3">
            <sup>
              <sup>
                [3]
              </sup>
            </sup>
          </a>
 Plessy v. Ferguson, 163 U.S. 537 (1896).
<br>
          <a href="#_ftnref4" name="_ftn4">
            <sup>
              <sup>
                [4]
              </sup>
            </sup>
          </a>
           Bradwell v. The State, 83 U.S. 130, 141 (1872).
        </p>
      </div>
    </div></p><h4>
		Downloads
	</h4><ul>
		<li><a href="http://webfeeds.brookings.edu/~/t/0/0/brookingsrss/series/futureoftheconstitution/~www.brookings.edu/~/media/research/files/papers/2010/12/17-constitutional-interpretation-lessig/1217_constitutional_interpretation_lessig.pdf">Download the Full Paper</a></li>
	</ul><div>
		<h4>
			Authors
		</h4><ul>
			<li>Lawrence Lessig</li>
		</ul>
	</div><div>
		Image Source: © Larry Downing / Reuters
	</div>
</div>
]]>
</content:encoded></item>
<item>
<feedburner:origLink>http://www.brookings.edu/research/papers/2010/12/08-4th-amendment-goldsmith?rssid=Future+of+the+Constitution</feedburner:origLink><guid isPermaLink="false">{9347B903-C601-4463-84C5-CD8DEEBAE975}</guid><link>http://webfeeds.brookings.edu/~/65487897/0/brookingsrss/series/futureoftheconstitution~The-Cyberthreat-Government-Network-Operations-and-the-Fourth-Amendment</link><title>The Cyberthreat, Government Network Operations, and the Fourth Amendment</title><description><![CDATA[<div>
	<img src="http://www.brookings.edu/~/media/research/images/c/cu%20cz/cybersecurity002_16x9.jpg?w=120" alt="" border="0" /><br /><p><b>Introduction</b></p><p><p>Many corporations have intrusion-prevention systems on their computers’ connections to the Internet. These systems scan the contents and metadata of incoming communications for malicious code that might facilitate a cyber attack, and take steps to thwart it. The United States government will have a similar system in place soon.  But public and private intrusion-prevention systems are uncoordinated, and most firms and individual users lack such systems. This is one reason why the national communications network is swarming with known malicious cyber agents that raise the likelihood of an attack on a critical infrastructure system that could cripple our economic or military security.</p>
    <p>
To meet this threat, imagine that sometime in the near future the government mandates the use of a government-coordinated intrusion-prevention system throughout the domestic network to monitor all communications, including private ones.  Imagine, more concretely, that this system requires the National Security Agency to work with private firms in the domestic communication network to collect, copy, share, and analyze the content and metadata of all communications for indicators of possible computer attacks, and to take real-time steps to prevent such attacks.</p>
    <p>
This scenario, I argue in this essay, is one end point of government programs that are already up and running.  It is where the nation might be headed, though perhaps not before we first suffer a catastrophic cyber attack that will spur the government to take these steps.  Such a program would be controversial.  It would require congressional approval and in particular would require mechanisms that credibly establish that the NSA is not using extraordinary access to the private network for pernicious ends.  But with plausible assumptions, even such an aggressive program could be deemed consistent with the U.S. Constitution, including the Fourth Amendment.   </p></p><h4>
		Downloads
	</h4><ul>
		<li><a href="http://www.brookings.edu/~/media/research/files/papers/2010/12/08-4th-amendment-goldsmith/1208_4th_amendment_goldsmith.pdf">Download the Full Paper</a></li>
	</ul><div>
		<h4>
			Authors
		</h4><ul>
			<li>Jack Goldsmith</li>
		</ul>
	</div><div>
		Image Source: © Hyungwon Kang / Reuters
	</div>
</div>]]>
</description><pubDate>Wed, 08 Dec 2010 16:13:00 -0500</pubDate><dc:creator>Jack Goldsmith</dc:creator>
<itunes:summary> 
Introduction
Many corporations have intrusion-prevention systems on their computers&#x2019; connections to the Internet. These systems scan the contents and metadata of incoming communications for malicious code that might facilitate a cyber attack, and take steps to thwart it. The United States government will have a similar system in place soon. But public and private intrusion-prevention systems are uncoordinated, and most firms and individual users lack such systems. This is one reason why the national communications network is swarming with known malicious cyber agents that raise the likelihood of an attack on a critical infrastructure system that could cripple our economic or military security. 
To meet this threat, imagine that sometime in the near future the government mandates the use of a government-coordinated intrusion-prevention system throughout the domestic network to monitor all communications, including private ones. Imagine, more concretely, that this system requires the National Security Agency to work with private firms in the domestic communication network to collect, copy, share, and analyze the content and metadata of all communications for indicators of possible computer attacks, and to take real-time steps to prevent such attacks. 
This scenario, I argue in this essay, is one end point of government programs that are already up and running. It is where the nation might be headed, though perhaps not before we first suffer a catastrophic cyber attack that will spur the government to take these steps. Such a program would be controversial. It would require congressional approval and in particular would require mechanisms that credibly establish that the NSA is not using extraordinary access to the private network for pernicious ends. But with plausible assumptions, even such an aggressive program could be deemed consistent with the U.S. Constitution, including the Fourth Amendment. 
Downloads
 - Download the Full Paper 
Authors
 - Jack Goldsmith 
Image Source: &#xA9; Hyungwon Kang / Reuters</itunes:summary>
<itunes:subtitle>Introduction
Many corporations have intrusion-prevention systems on their computers&#x2019; connections to the Internet. These systems scan the contents and metadata of incoming communications for malicious code that might facilitate a cyber ... </itunes:subtitle><content:encoded><![CDATA[<div>
	<img src="http://www.brookings.edu/~/media/research/images/c/cu%20cz/cybersecurity002_16x9.jpg?w=120" alt="" border="0" />
<br><p><b>Introduction</b></p><p><p>Many corporations have intrusion-prevention systems on their computers’ connections to the Internet. These systems scan the contents and metadata of incoming communications for malicious code that might facilitate a cyber attack, and take steps to thwart it. The United States government will have a similar system in place soon.  But public and private intrusion-prevention systems are uncoordinated, and most firms and individual users lack such systems. This is one reason why the national communications network is swarming with known malicious cyber agents that raise the likelihood of an attack on a critical infrastructure system that could cripple our economic or military security.</p>
    <p>
To meet this threat, imagine that sometime in the near future the government mandates the use of a government-coordinated intrusion-prevention system throughout the domestic network to monitor all communications, including private ones.  Imagine, more concretely, that this system requires the National Security Agency to work with private firms in the domestic communication network to collect, copy, share, and analyze the content and metadata of all communications for indicators of possible computer attacks, and to take real-time steps to prevent such attacks.</p>
    <p>
This scenario, I argue in this essay, is one end point of government programs that are already up and running.  It is where the nation might be headed, though perhaps not before we first suffer a catastrophic cyber attack that will spur the government to take these steps.  Such a program would be controversial.  It would require congressional approval and in particular would require mechanisms that credibly establish that the NSA is not using extraordinary access to the private network for pernicious ends.  But with plausible assumptions, even such an aggressive program could be deemed consistent with the U.S. Constitution, including the Fourth Amendment.   </p></p><h4>
		Downloads
	</h4><ul>
		<li><a href="http://webfeeds.brookings.edu/~/t/0/0/brookingsrss/series/futureoftheconstitution/~www.brookings.edu/~/media/research/files/papers/2010/12/08-4th-amendment-goldsmith/1208_4th_amendment_goldsmith.pdf">Download the Full Paper</a></li>
	</ul><div>
		<h4>
			Authors
		</h4><ul>
			<li>Jack Goldsmith</li>
		</ul>
	</div><div>
		Image Source: © Hyungwon Kang / Reuters
	</div>
</div>
]]>
</content:encoded></item>
<item>
<feedburner:origLink>http://www.brookings.edu/research/papers/2010/12/08-4th-amendment-slobogin?rssid=Future+of+the+Constitution</feedburner:origLink><guid isPermaLink="false">{6D6AA814-252B-4BBD-AB05-1B0A06894BE6}</guid><link>http://webfeeds.brookings.edu/~/65487898/0/brookingsrss/series/futureoftheconstitution~Is-the-Fourth-Amendment-Relevant-in-a-Technological-Age</link><title>Is the Fourth Amendment Relevant in a Technological Age?</title><description><![CDATA[<div>
	<img src="http://www.brookings.edu/~/media/research/images/v/va%20ve/vehicle_search001_16x9.jpg?w=120" alt="" border="0" /><br /><p><b>Introduction</b></p><p><p>It’s the year 2015. Officer Jones, a New York City police officer, stops a car because it has a broken taillight. The driver of the car turns out to be a man named Ahmad Abdullah. Abdullah’s license and registration check out, but he seems nervous, at least to Jones. Jones goes back to his squad car and activates his Raytheon electromagnetic pulse scanner, which can scan the car for weapons and bombs. Nothing shows up on the screen. Nonetheless, he attaches a Global Positioning System (GPS) device known as a Q-ball underneath the rear bumper as he pretends to be looking at Abdullah’s license plate.</p>
    <p>Over the next several weeks, New York police use the GPS device to track Abdullah’s travels throughout the New York City area. They also watch him take walks from his apartment, relying on public video cameras mounted on buildings and light poles. When cameras cannot capture his meanderings or he takes public transportation or travels in a friend’s car, the police use drone cameras, powerful enough to pick up the numbers on a license plate, to monitor him.  Police interest is piqued when they discover that he visits not only his local mosque but several other mosques around the New York area. They requisition his phone and Internet Service Provider records to ascertain the phone numbers and email addresses of the people with whom he communicates. Through digital sources, they also obtain his bank and credit card records.  For good measure, the police pay the data collection company Choicepoint for a report on all the information about Abdullah that can be gleaned from public records and Internet sources. Finally, since Abdullah tends to leave his windows uncurtained, police set up a Star-Tron—binoculars with night-vision capacity—in a building across the way from Abdullah’s apartment so they can watch him through his window.</p>
    <p>These various investigative maneuvers might lead to discovery that Abdullah is consorting with known terrorists. Or they might merely provide police with proof that Abdullah is an illegal immigrant. Then there’s always the possibility that Abdullah hasn’t committed any crime.   </p>
    <p>The important point for present purposes is that the Constitution has nothing to say about any of the police actions that take place in Abdullah’s case once his car is stopped. The constitutional provision that is most likely to be implicated by the government’s attempts to investigate Abdullah is the Fourth Amendment, which prohibits unreasonable searches of houses, persons, papers and effects, and further provides that, if a warrant is sought authorizing a search, it must be based on probable cause and describe with particularity the place to be searched and the person or thing to be seized. This language is the primary constitutional mechanism for regulating police investigations. The courts have held that, when police engage in a search, they must usually have probable cause—about a 50 percent certainty—that the search will produce evidence of crime, and must also have a warrant, issued by an independent magistrate, if there is time to get one. As construed by the United States Supreme Court, however, these requirements are irrelevant to many modern police practices, including most or all of those involved in Abdullah’s case.  </p>
    <p>The Fourth Amendment’s increasing irrelevance stems from the fact that the Supreme Court is mired in precedent decided in another era. Over the past 200 years, the Fourth Amendment’s guarantees have been construed largely in the context of what might be called “physical searches”—entry into a house or car; a stop and frisk of a person on the street; or rifling through a person’s private papers. But today, with the introduction of devices that can see through walls and clothes, monitor public thoroughfares twenty-four hours a day, and access millions of records in seconds, police are relying much more heavily on what might be called “virtual searches,” investigative techniques that do not require physical access to premises, people, papers or effects and that can often be carried out covertly from far away. As Abdullah’s case illustrates, this technological revolution is well on its way to drastically altering the way police go about looking for evidence of crime. To date, the Supreme Court’s interpretation of the Fourth Amendment has both failed to anticipate this revolution and continued to ignore it.</p></p><h4>
		Downloads
	</h4><ul>
		<li><a href="http://www.brookings.edu/~/media/research/files/papers/2010/12/08-4th-amendment-slobogin/1208_4th_amendment_slobogin.pdf">Download the Full Paper</a></li>
	</ul><div>
		<h4>
			Authors
		</h4><ul>
			<li>Christopher Slobogin</li>
		</ul>
	</div><div>
		Image Source: © Brendan McDermid / Reuters
	</div>
</div>]]>
</description><pubDate>Wed, 08 Dec 2010 16:13:00 -0500</pubDate><dc:creator>Christopher Slobogin</dc:creator>
<itunes:summary> 
Introduction
It&#x2019;s the year 2015.&#xA0;Officer Jones, a New York City police officer, stops a car because it has a broken taillight.&#xA0;The driver of the car turns out to be a man named Ahmad Abdullah.&#xA0;Abdullah&#x2019;s license and registration check out, but he seems nervous, at least to Jones.&#xA0;Jones goes back to his squad car and activates his Raytheon electromagnetic pulse scanner, which can scan the car for weapons and bombs.&#xA0;Nothing shows up on the screen.&#xA0;Nonetheless, he attaches a Global Positioning Device known as a Q-ball underneath the rear bumper as he pretends to be looking at Abdullah&#x2019;s license plate. 
Over the next several weeks, New York police use the GPS device to track Abdullah&#x2019;s travels throughout the New York City area.&#xA0;They also watch him take walks from his apartment, relying on public video cameras mounted on buildings and light poles.&#xA0;When cameras cannot capture his meanderings or he takes public transportation or travels in a friend&#x2019;s car, the police use drone cameras, powerful enough to pick up the numbers on a license plate, to monitor him.&#xA0; Police interest is piqued when they discover that he visits not only his local mosque but several other mosques around the New York area.&#xA0;They requisition his phone and Internet Service Provider records to ascertain the phone numbers and email addresses of the people with whom he communicates.&#xA0;Through digital sources, they also obtain his bank and credit card records.&#xA0; For good measure, the police pay the data collection company Choicepoint for a report on all the information about Abdullah that can be gleaned from public records and Internet sources.&#xA0;Finally, since Abdullah tends to leave his windows uncurtained, police set up a Star-Tron&#x2014;binoculars with nightvision capacity&#x2014;in a building across the way from Abdullah&#x2019;s apartment so they can watch him through his window. 
These various investigative maneuvers might lead to discovery that Abdullah is consorting with known terrorists.&#xA0;Or they might merely provide police with proof that Abdullah is an illegal immigrant.&#xA0;Then there&#x2019;s always the possibility that Abdullah hasn&#x2019;t committed any crime.&#xA0;&#xA0; 
The important point for present purposes is that the Constitution has nothing to say about any of the police actions that take place in Abdullah&#x2019;s case once his car is stopped.&#xA0;The constitutional provision that is most likely to be implicated by the government&#x2019;s attempts to investigate Abdullah is the Fourth Amendment, which prohibits unreasonable searches of houses, persons, papers and effects, and further provides that, if a warrant is sought authorizing a search, it must be based on probable cause and describe with particularity the place to be searched and the person or thing to be seized.&#xA0;This language is the primary constitutional mechanism for regulating police investigations.&#xA0;The courts have held that, when police engage in a search, they must usually have probable cause&#x2014;about a 50 percent certainty&#x2014;that the search will produce evidence of crime, and must also have a warrant, issued by an independent magistrate, if there is time to get one. As construed by the United States Supreme Court, however, these requirements are irrelevant to many modern police practices, including most or all of those involved in Abdullah&#x2019;s case.&#xA0; 
The Fourth Amendment&#x2019;s increasing irrelevance stems from the fact that the Supreme Court is mired in precedent decided in another era.&#xA0;Over the past 200 years, the Fourth Amendment&#x2019;s guarantees have been construed largely in the context of what might be called &#8220;physical searches&#8221;&#x2014;entry into a house or car; a stop and frisk of a person on the street; or rifling through a person&#x2019;s private papers.&#xA0;But today, with the introduction of devices ... </itunes:summary>
<itunes:subtitle>Introduction
It&#x2019;s the year 2015.&#xA0;Officer Jones, a New York City police officer, stops a car because it has a broken taillight.&#xA0;The driver of the car turns out to be a man named Ahmad Abdullah.&#xA0;Abdullah&#x2019;</itunes:subtitle><content:encoded><![CDATA[<div>
	<img src="http://www.brookings.edu/~/media/research/images/v/va%20ve/vehicle_search001_16x9.jpg?w=120" alt="" border="0" />
<br><p><b>Introduction</b></p><p><p>It’s the year 2015. Officer Jones, a New York City police officer, stops a car because it has a broken taillight. The driver of the car turns out to be a man named Ahmad Abdullah. Abdullah’s license and registration check out, but he seems nervous, at least to Jones. Jones goes back to his squad car and activates his Raytheon electromagnetic pulse scanner, which can scan the car for weapons and bombs. Nothing shows up on the screen. Nonetheless, he attaches a Global Positioning System (GPS) device known as a Q-ball underneath the rear bumper as he pretends to be looking at Abdullah’s license plate.</p>
    <p>Over the next several weeks, New York police use the GPS device to track Abdullah’s travels throughout the New York City area. They also watch him take walks from his apartment, relying on public video cameras mounted on buildings and light poles. When cameras cannot capture his meanderings or he takes public transportation or travels in a friend’s car, the police use drone cameras, powerful enough to pick up the numbers on a license plate, to monitor him.  Police interest is piqued when they discover that he visits not only his local mosque but several other mosques around the New York area. They requisition his phone and Internet Service Provider records to ascertain the phone numbers and email addresses of the people with whom he communicates. Through digital sources, they also obtain his bank and credit card records.  For good measure, the police pay the data collection company Choicepoint for a report on all the information about Abdullah that can be gleaned from public records and Internet sources. Finally, since Abdullah tends to leave his windows uncurtained, police set up a Star-Tron—binoculars with night-vision capacity—in a building across the way from Abdullah’s apartment so they can watch him through his window.</p>
    <p>These various investigative maneuvers might lead to discovery that Abdullah is consorting with known terrorists. Or they might merely provide police with proof that Abdullah is an illegal immigrant. Then there’s always the possibility that Abdullah hasn’t committed any crime.   </p>
    <p>The important point for present purposes is that the Constitution has nothing to say about any of the police actions that take place in Abdullah’s case once his car is stopped. The constitutional provision that is most likely to be implicated by the government’s attempts to investigate Abdullah is the Fourth Amendment, which prohibits unreasonable searches of houses, persons, papers and effects, and further provides that, if a warrant is sought authorizing a search, it must be based on probable cause and describe with particularity the place to be searched and the person or thing to be seized. This language is the primary constitutional mechanism for regulating police investigations. The courts have held that, when police engage in a search, they must usually have probable cause—about a 50 percent certainty—that the search will produce evidence of crime, and must also have a warrant, issued by an independent magistrate, if there is time to get one. As construed by the United States Supreme Court, however, these requirements are irrelevant to many modern police practices, including most or all of those involved in Abdullah’s case.  </p>
    <p>The Fourth Amendment’s increasing irrelevance stems from the fact that the Supreme Court is mired in precedent decided in another era. Over the past 200 years, the Fourth Amendment’s guarantees have been construed largely in the context of what might be called “physical searches”—entry into a house or car; a stop and frisk of a person on the street; or rifling through a person’s private papers. But today, with the introduction of devices that can see through walls and clothes, monitor public thoroughfares twenty-four hours a day, and access millions of records in seconds, police are relying much more heavily on what might be called “virtual searches,” investigative techniques that do not require physical access to premises, people, papers or effects and that can often be carried out covertly from far away. As Abdullah’s case illustrates, this technological revolution is well on its way to drastically altering the way police go about looking for evidence of crime. To date, the Supreme Court’s interpretation of the Fourth Amendment has both failed to anticipate this revolution and continued to ignore it.</p></p><h4>
		Downloads
	</h4><ul>
		<li><a href="http://webfeeds.brookings.edu/~/t/0/0/brookingsrss/series/futureoftheconstitution/~www.brookings.edu/~/media/research/files/papers/2010/12/08-4th-amendment-slobogin/1208_4th_amendment_slobogin.pdf">Download the Full Paper</a></li>
	</ul><div>
		<h4>
			Authors
		</h4><ul>
			<li>Christopher Slobogin</li>
		</ul>
	</div><div>
		Image Source: © Brendan McDermid / Reuters
	</div>
</div>
]]>
</content:encoded></item>
<item>
<feedburner:origLink>http://www.brookings.edu/research/papers/2010/12/08-biosecurity-wittes?rssid=Future+of+the+Constitution</feedburner:origLink><guid isPermaLink="false">{59F1284F-1B73-4F81-AA13-3CD591B9A639}</guid><link>http://webfeeds.brookings.edu/~/65487899/0/brookingsrss/series/futureoftheconstitution~Innovation%e2%80%99s-Darker-Future-Biosecurity-Technologies-of-Mass-Empowerment-and-the-Constitution</link><title>Innovation’s Darker Future: Biosecurity, Technologies of Mass Empowerment, and the Constitution </title><description><![CDATA[<div>
	<img src="http://www.brookings.edu/~/media/research/images/b/bf%20bj/biosecurity001_16x9.jpg?w=120" alt="" border="0" /><br /><p><b>Introduction</b></p><p><p>Using gene-splicing equipment available online and other common laboratory equipment and materials, a molecular biology graduate student undertakes a secret project to recreate the smallpox virus. Not content merely to bring back an extinct virus to which the general population is now largely naïve, he uses public source material to enhance the virus’s lethality, enabling it to infect even those whom the government rushes to immunize. His activities raise no eyebrows at his university lab, where synthesizing and modifying complex genomes is even more commonplace and mundane by 2025 than it is today. While time-consuming, the task is not especially difficult. And when he finishes, he infects himself and, just as symptoms begin to emerge, he proceeds to have close contact with as many people from as many possible walks of life as he can in a short time. He then kills himself before becoming ill and is buried by his grieving family with neither they nor the authorities having any idea of his infection.</p>
    <p>
The outbreak begins just shy of two weeks later and seems to come from everywhere at once. Because of the virus’s long incubation period, it has spread far by the time the disease first manifests itself. Initial efforts to immunize swaths of the population prove of limited utility because of the perpetrator’s manipulations of the viral genome. Even efforts to identify the perpetrator require many months of forensic effort. In the meantime, authorities have no idea whether the country—and quickly the world—has just suffered an attack by a rogue state, a terrorist group, or a lone individual. Dozens of groups around the world claim responsibility for the attack, several of them plausibly.</p>
    <p>
The government responds on many levels: It moves aggressively to quarantine those infected with the virus, detaining large numbers of people in the process. It launches a major surveillance effort against the enormous number of people with access to gene synthesis equipment and the capacity to modify viral genomes in an effort to identify future threats from within American and foreign labs. It attempts to restrict access to information and publications about the synthesis and manipulation of pathogenic organisms—suddenly classifying large amounts of previously public literature and blocking publication of journal articles that it regards as high-risk. It requires that gene synthesis equipment electronically monitor its own use, report on attempts to construct sequences of concern to the government, and create an audit trail of all synthesis activity. And it asks scientists all over the world to report on one another when they see behaviors that raise concerns. Each of these steps produces significant controversy and each, in different ways, faces legal challenge.   </p>
    <p>
The future of innovation has a dark and dangerous side, one we dislike talking about and often prefer to pretend does not, in fact, loom before us. Yet it is a side that the Constitution seems preponderantly likely to have to confront—in 2025, at some point later, or tomorrow. There is nothing especially implausible about the scenario I have just outlined—even based on today’s technology. By 2025, if not far sooner, we will likely have to confront the individual power to cause epidemics, and probably other things as well. </p>
    <p>
Technologies that put destructive power traditionally confined to states in the hands of small groups and individuals have proliferated remarkably far. That proliferation is accelerating at an awe-inspiring clip across a number of technological platforms. Eventually, it’s going to bite us hard. The response to, or perhaps the anticipation of, that bite will put considerable pressure on constitutional norms in any number of areas.</p>
    <p>
We tend to think of the future of innovation in terms of intellectual property issues and such regulatory policy questions as how aggressive antitrust enforcement ought to be and whether the government should require Internet neutrality or give carriers latitude to favor certain content over other content. Broadly speaking, these questions translate into disputes over which government policies best foster innovation—with innovation presumed to be salutary and the government, by and large, in the position of arbiter between competing market players. 
But confining the discussion of the future of innovation to the relationship among innovators ignores the relationship between innovators and government itself. And government has unique equities in the process of innovation, both because it is a huge consumer of products in general and also because it has unique responsibilities in society at large. Chief among these is security. Quite apart from the question of who owns the rights to certain innovations, government has a stake in who is developing what—at least to the extent that some innovations carry significant capacity for misuse, crime, death, and mayhem.</p>
    <p>
This problem is not new—at least not conceptually. The character of the mad scientist muh-huh-huhing to himself as he swirls a flask and promises, “Then I shall destroy the world!” is the stuff of old movies and cartoons. In literature, versions of it date back at least to Mary Shelley in the early 19th century. Along with literary works set in technologically sophisticated dystopias, it is one of the ways in which our society represents fears of rapidly evolving technology. </p>
    <p>
The trouble is that it is no longer the stuff of science fiction alone. The past few decades have seen an ever-augmenting ability of relatively small, non-state groups to wage asymmetric conflicts against even powerful states. The groups in question have been growing smaller, more diffuse, and more loosely knit, and technology is both facilitating that development and dramatically increasing these groups’ ultimate lethality. This trend is not futuristic. It is already well under way across a number of technological platforms—most prominently the life sciences and computer technology. For reasons I shall explain, the trend seems likely to continue, probably even to accelerate. The technologies in question, unlike the technologies associated with nuclear warfare, were not developed in a classified setting but in the public domain. They are getting cheaper and proliferating ever more widely for the most noble and innocent of reasons: the desire to cure disease and increase human connectivity, efficiency, and capability. As a global community, we are becoming ever more dependent upon these technologies for health, agriculture, communications, jobs, economic growth and development, even culture. Yet these same technologies—and these same dependencies—make us enormously vulnerable to bad actors with access to them. Whereas once only states could contemplate killing huge numbers of civilians with a devastating drug-resistant illness or taking down another country’s power grids, now every responsible government must contemplate the possibility of ever smaller groupings of people undertaking what are traditionally understood as acts of war. We have already seen the migration of the destructive power of states to global non-state actors, particularly Al Qaeda. We can reasonably expect that migration to progress still further. 
It ultimately threatens to give every individual with a modest education and a certain level of technical proficiency the power to bring about catastrophic damage. Whereas governments once had to contemplate as strategic threats only one another and a select bunch of secessionist militias and could engage with individuals as citizens or subjects, this trend ominously promises to force governments to regard individuals as potential strategic threats. Think of a world composed of billions of people walking around with nuclear weapons in their pockets. </p>
    <p>
If that sounds hyperbolic, it is probably only a little bit hyperbolic. As I shall explain, the current threat landscape in the life sciences—the area I use in this paper as a kind of case study—is truly terrifying. (No less so is the cyber arena, an area Jack Goldsmith is treating in detail and where attacks are already commonplace.) The landscape is likely to grow only scarier as the costs of gene synthesis and genetic engineering technologies more generally continue to plummet, as their capacity continues to grow, and as the number of people capable individually or in small groups of deploying them catastrophically continues to expand. The more one studies the literature on biothreats, in fact, the more puzzling it becomes that a catastrophic attack has not yet happened.</p>
    <p>
Yet biothreats alone are not the problem; the full problem is the broader category of threats they represent. Over the coming decades, we are likely to see other areas of technological development that put enormous power in the hands of individuals. The issue will not simply be managing the threat of biological terrorism or biosecurity more broadly. It will be defining a relationship between the state and individuals with respect to the use and development of such dramatically empowering new technologies that both permits the state to protect security and insists that it do so without becoming oppressive.</p>
    <p>
To state this problem is to raise constitutional questions, and I’m not entirely sure that a solution to it exists. Governments simply cannot regard billions of people around the world as potential strategic threats without that fact elementally changing the way states and those individuals interact. If I am right that the biotech revolution potentially allows individuals to stock their own WMD arsenals and that other emergent technologies will create similar opportunities, government will eventually respond—and dramatically. It will have no choice.</p>
    <p>
But exactly how to respond—either in reaction or in anticipation—is far from clear. Both the knowledge and the technologies themselves have proliferated so widely to begin with that the cat really is out of the bag. Even the most repressive measures won't suffice to stuff it back in. Indeed, the options seem rather limited and all quite bad: intrusive, oppressive, and unlikely to do much good.</p>
    <p>
And it is precisely this combination of a relatively low probability of policy success, high costs to other interests, and constitutional difficulties that will produce, I suspect, perhaps the most profound change to the Constitution emanating from this class of technologies. This change will not, ironically, be to the Bill of Rights but to the Constitution's most basic assumptions with respect to security. That is, the continued proliferation of these technologies will almost certainly precipitate a significant erosion of the federal government's monopoly over security policy. It will tend to distribute responsibility for security to thousands of private sector and university actors whom the technology empowers every bit as much as it does would-be terrorists and criminals. This point is perhaps clearest in the context of cybersecurity, but it is also true in the biotech arena, where the best defense against biohazards, man-made and naturally occurring alike, is good public health infrastructure and more of the same basic research that makes biological attacks possible. Most of this research is going on in private companies and universities, not in government; the biotech industry is not composed of a bunch of defense contractors who are used to being private sector arms of the state. Increasingly, security will thus take on elements of the distributed application, a term the technology world uses to refer to programs which rely on large numbers of networked computers all working together to perform tasks to which no one system could or would devote adequate resources. While state power certainly will have a role here—and probably an uncomfortable role involving a lot of intrusive surveillance—it may not be the role that government has played in security in the past.</p></p><h4>
		Downloads
	</h4><ul>
		<li><a href="http://www.brookings.edu/~/media/research/files/papers/2010/12/08-biosecurity-wittes/1208_biosecurity_wittes.pdf">Download the Full Paper</a></li>
	</ul><div>
		<h4>
			Authors
		</h4><ul>
			<li><a href="http://www.brookings.edu/experts/wittesb?view=bio">Benjamin Wittes</a></li>
		</ul>
	</div><div>
		Image Source: © Jim Young / Reuters
	</div>
</div>]]>
</description><pubDate>Wed, 08 Dec 2010 16:13:00 -0500</pubDate><dc:creator>Benjamin Wittes</dc:creator>
<itunes:summary> 
Introduction
Using gene-splicing equipment available online and other common laboratory equipment and materials, a molecular biology graduate student undertakes a secret project to recreate the smallpox virus. Not content merely to bring back an extinct virus to which the general population is now largely na&#xEF;ve, he uses public source material to enhance the virus&#x2019;s lethality, enabling it to infect even those whom the government rushes to immunize. His activities raise no eyebrows at his university lab, where synthesizing and modifying complex genomes is even more commonplace and mundane by 2025 than it is today. While time-consuming, the task is not especially difficult. And when he finishes, he infects himself and, just as symptoms begin to emerge, he proceeds to have close contact with as many people from as many possible walks of life as he can in a short time. He then kills himself before becoming ill and is buried by his grieving family with neither they nor the authorities having any idea of his infection. 
The outbreak begins just shy of two weeks later and seems to come from everywhere at once. Because of the virus&#x2019;s long incubation period, it has spread far by the time the disease first manifests itself. Initial efforts to immunize swaths of the population prove of limited utility because of the perpetrator&#x2019;s manipulations of the viral genome. Even efforts to identify the perpetrator require many months of forensic effort. In the meantime, authorities have no idea whether the country&#x2014;and quickly the world&#x2014;has just suffered an attack by a rogue state, a terrorist group, or a lone individual. Dozens of groups around the world claim responsibility for the attack, several of them plausibly. 
The government responds on many levels: It moves aggressively to quarantine those infected with the virus, detaining large numbers of people in the process. It launches a major surveillance effort against the enormous number of people with access to gene synthesis equipment and the capacity to modify viral genomes in an effort to identify future threats from within American and foreign labs. It attempts to restrict access to information and publications about the synthesis and manipulation of pathogenic organisms&#x2014;suddenly classifying large amounts of previously public literature and blocking publication of journal articles that it regards as high-risk. It requires that gene synthesis equipment electronically monitor its own use, report on attempts to construct sequences of concern to the government, and create an audit trail of all synthesis activity. And it asks scientists all over the world to report on one another when they see behaviors that raise concerns. Each of these steps produces significant controversy and each, in different ways, faces legal challenge. 
The future of innovation has a dark and dangerous side, one we dislike talking about and often prefer to pretend does not, in fact, loom before us. Yet it is a side that the Constitution seems preponderantly likely to have to confront&#x2014;in 2025, at some point later, or tomorrow. There is nothing especially implausible about the scenario I have just outlined&#x2014;even based on today&#x2019;s technology. By 2025, if not far sooner, we will likely have to confront the individual power to cause epidemics, and probably other things as well. 
Technologies that put destructive power traditionally confined to states in the hands of small groups and individuals have proliferated remarkably far. That proliferation is accelerating at an awe-inspiring clip across a number of technological platforms. Eventually, it&#x2019;s going to bite us hard. The response to, or perhaps the anticipation of, that bite will put considerable pressure on constitutional norms in any number of areas. 
We tend to think of the future of innovation in terms of intellectual property issues and such regulatory policy questions as how aggressive ... </itunes:summary>
<itunes:subtitle>Introduction
Using gene-splicing equipment available online and other common laboratory equipment and materials, a molecular biology graduate student undertakes a secret project to recreate the smallpox virus. Not content merely to bring back an ... </itunes:subtitle><content:encoded><![CDATA[<div>
	<img src="http://www.brookings.edu/~/media/research/images/b/bf%20bj/biosecurity001_16x9.jpg?w=120" alt="" border="0" />
<br><p><b>Introduction</b></p><p><p>Using gene-splicing equipment available online and other common laboratory equipment and materials, a molecular biology graduate student undertakes a secret project to recreate the smallpox virus. Not content merely to bring back an extinct virus to which the general population is now largely naïve, he uses public source material to enhance the virus’s lethality, enabling it to infect even those whom the government rushes to immunize. His activities raise no eyebrows at his university lab, where synthesizing and modifying complex genomes is even more commonplace and mundane by 2025 than it is today. While time-consuming, the task is not especially difficult. And when he finishes, he infects himself and, just as symptoms begin to emerge, he proceeds to have close contact with as many people from as many possible walks of life as he can in a short time. He then kills himself before becoming ill and is buried by his grieving family with neither they nor the authorities having any idea of his infection.</p>
    <p>
The outbreak begins just shy of two weeks later and seems to come from everywhere at once. Because of the virus’s long incubation period, it has spread far by the time the disease first manifests itself. Initial efforts to immunize swaths of the population prove of limited utility because of the perpetrator’s manipulations of the viral genome. Even efforts to identify the perpetrator require many months of forensic effort. In the meantime, authorities have no idea whether the country—and quickly the world—has just suffered an attack by a rogue state, a terrorist group, or a lone individual. Dozens of groups around the world claim responsibility for the attack, several of them plausibly.</p>
    <p>
The government responds on many levels: It moves aggressively to quarantine those infected with the virus, detaining large numbers of people in the process. It launches a major surveillance effort against the enormous number of people with access to gene synthesis equipment and the capacity to modify viral genomes in an effort to identify future threats from within American and foreign labs. It attempts to restrict access to information and publications about the synthesis and manipulation of pathogenic organisms—suddenly classifying large amounts of previously public literature and blocking publication of journal articles that it regards as high-risk. It requires that gene synthesis equipment electronically monitor its own use, report on attempts to construct sequences of concern to the government, and create an audit trail of all synthesis activity. And it asks scientists all over the world to report on one another when they see behaviors that raise concerns. Each of these steps produces significant controversy and each, in different ways, faces legal challenge.   </p>
    <p>
The future of innovation has a dark and dangerous side, one we dislike talking about and often prefer to pretend does not, in fact, loom before us. Yet it is a side that the Constitution seems preponderantly likely to have to confront—in 2025, at some point later, or tomorrow. There is nothing especially implausible about the scenario I have just outlined—even based on today’s technology. By 2025, if not far sooner, we will likely have to confront the individual power to cause epidemics, and probably other things as well. </p>
    <p>
Technologies that put destructive power traditionally confined to states in the hands of small groups and individuals have proliferated remarkably far. That proliferation is accelerating at an awe-inspiring clip across a number of technological platforms. Eventually, it’s going to bite us hard. The response to, or perhaps the anticipation of, that bite will put considerable pressure on constitutional norms in any number of areas.</p>
    <p>
We tend to think of the future of innovation in terms of intellectual property issues and such regulatory policy questions as how aggressive antitrust enforcement ought to be and whether the government should require Internet neutrality or give carriers latitude to favor certain content over other content. Broadly speaking, these questions translate into disputes over which government policies best foster innovation—with innovation presumed to be salutary and the government, by and large, in the position of arbiter between competing market players. 
But confining the discussion of the future of innovation to the relationship among innovators ignores the relationship between innovators and government itself. And government has unique equities in the process of innovation, both because it is a huge consumer of products in general and also because it has unique responsibilities in society at large. Chief among these is security. Quite apart from the question of who owns the rights to certain innovations, government has a stake in who is developing what—at least to the extent that some innovations carry significant capacity for misuse, crime, death, and mayhem.</p>
    <p>
This problem is not new—at least not conceptually. The character of the mad scientist muh-huh-huhing to himself as he swirls a flask and promises, “Then I shall destroy the world!” is the stuff of old movies and cartoons. In literature, versions of it date back at least to Mary Shelley in the early 19th century. Along with literary works set in technologically sophisticated dystopias, it is one of the ways in which our society represents fears of rapidly evolving technology. </p>
    <p>
The trouble is that it is no longer the stuff of science fiction alone. The past few decades have seen an ever-augmenting ability of relatively small, non-state groups to wage asymmetric conflicts against even powerful states. The groups in question have been growing smaller, more diffuse, and more loosely knit, and technology is both facilitating that development and dramatically increasing these groups’ ultimate lethality. This trend is not futuristic. It is already well under way across a number of technological platforms—most prominently the life sciences and computer technology. For reasons I shall explain, the trend seems likely to continue, probably even to accelerate. The technologies in question, unlike the technologies associated with nuclear warfare, were not developed in a classified setting but in the public domain. They are getting cheaper and proliferating ever more widely for the most noble and innocent of reasons: the desire to cure disease and increase human connectivity, efficiency, and capability. As a global community, we are becoming ever more dependent upon these technologies for health, agriculture, communications, jobs, economic growth and development, even culture. Yet these same technologies—and these same dependencies—make us enormously vulnerable to bad actors with access to them. Whereas once only states could contemplate killing huge numbers of civilians with a devastating drug-resistant illness or taking down another country’s power grids, now every responsible government must contemplate the possibility of ever smaller groupings of people undertaking what are traditionally understood as acts of war. We have already seen the migration of the destructive power of states to global non-state actors, particularly Al Qaeda. We can reasonably expect that migration to progress still further. 
It ultimately threatens to give every individual with a modest education and a certain level of technical proficiency the power to bring about catastrophic damage. Whereas governments once had to contemplate as strategic threats only one another and a select bunch of secessionist militias and could engage with individuals as citizens or subjects, this trend ominously promises to force governments to regard individuals as potential strategic threats. Think of a world composed of billions of people walking around with nuclear weapons in their pockets. </p>
    <p>
If that sounds hyperbolic, it is probably only a little bit hyperbolic. As I shall explain, the current threat landscape in the life sciences—the area I use in this paper as a kind of case study—is truly terrifying. (No less so is the cyber arena, an area Jack Goldsmith is treating in detail and where attacks are already commonplace.) The landscape is likely to grow only scarier as the costs of gene synthesis and genetic engineering technologies more generally continue to plummet, as their capacity continues to grow, and as the number of people capable individually or in small groups of deploying them catastrophically continues to expand. The more one studies the literature on biothreats, in fact, the more puzzling it becomes that a catastrophic attack has not yet happened.</p>
    <p>
Yet biothreats alone are not the problem; the full problem is the broader category of threats they represent. Over the coming decades, we are likely to see other areas of technological development that put enormous power in the hands of individuals. The issue will not simply be managing the threat of biological terrorism or biosecurity more broadly. It will be defining a relationship between the state and individuals with respect to the use and development of such dramatically empowering new technologies that both permits the state to protect security and insists that it do so without becoming oppressive.</p>
    <p>
To state this problem is to raise constitutional questions, and I’m not entirely sure that a solution to it exists. Governments simply cannot regard billions of people around the world as potential strategic threats without that fact elementally changing the way states and those individuals interact. If I am right that the biotech revolution potentially allows individuals to stock their own WMD arsenals and that other emergent technologies will create similar opportunities, government will eventually respond—and dramatically. It will have no choice.</p>
    <p>
But exactly how to respond—either in reaction or in anticipation—is far from clear. Both the knowledge and the technologies themselves have proliferated so widely to begin with that the cat really is out of the bag. Even the most repressive measures won't suffice to stuff it back in. Indeed, the options seem rather limited and all quite bad: intrusive, oppressive, and unlikely to do much good.</p>
    <p>
And it is precisely this combination of a relatively low probability of policy success, high costs to other interests, and constitutional difficulties that will produce, I suspect, perhaps the most profound change to the Constitution emanating from this class of technologies. This change will not, ironically, be to the Bill of Rights but to the Constitution's most basic assumptions with respect to security. That is, the continued proliferation of these technologies will almost certainly precipitate a significant erosion of the federal government's monopoly over security policy. It will tend to distribute responsibility for security to thousands of private sector and university actors whom the technology empowers every bit as much as it does would-be terrorists and criminals. This point is perhaps clearest in the context of cybersecurity, but it is also true in the biotech arena, where the best defense against biohazards, man-made and naturally occurring alike, is good public health infrastructure and more of the same basic research that makes biological attacks possible. Most of this research is going on in private companies and universities, not in government; the biotech industry is not composed of a bunch of defense contractors who are used to being private sector arms of the state. Increasingly, security will thus take on elements of the distributed application, a term the technology world uses to refer to programs which rely on large numbers of networked computers all working together to perform tasks to which no one system could or would devote adequate resources. While state power certainly will have a role here—and probably an uncomfortable role involving a lot of intrusive surveillance—it may not be the role that government has played in security in the past.</p></p><h4>
		Downloads
	</h4><ul>
		<li><a href="http://webfeeds.brookings.edu/~/t/0/0/brookingsrss/series/futureoftheconstitution/~www.brookings.edu/~/media/research/files/papers/2010/12/08-biosecurity-wittes/1208_biosecurity_wittes.pdf">Download the Full Paper</a></li>
	</ul><div>
		<h4>
			Authors
		</h4><ul>
			<li><a href="http://webfeeds.brookings.edu/~/t/0/0/brookingsrss/series/futureoftheconstitution/~www.brookings.edu/experts/wittesb?view=bio">Benjamin Wittes</a></li>
		</ul>
	</div><div>
		Image Source: © Jim Young / Reuters
	</div>
</div>
]]>
</content:encoded></item>
</channel></rss>

