Wilneida Negrón, Technology Fellow, Gender, Racial, and Ethnic Justice
The resilience and fragility of 2017 have been met with increased instability, polarization, and crackdowns on democratic principles in 2018. Still, activists around the world continue to push for the protection and expansion of human and civil rights. At the same time, technology continues to both support and undermine social justice, adding a layer of complexity for the field.
Last year, I identified 10 tech trends impacting social justice. Many of them will continue to pose challenges in 2018, and the new year brings emerging threats and trends at the nexus of technology and social justice. Here are a few that stand out:
Digital rights, including privacy protections and freedom of expression, are being contested at all levels of government. The EU will soon implement new privacy laws with implications for citizens around the world. In the U.S., the Supreme Court case Carpenter v. United States is re-examining Americans' rights to digital privacy and is considered one of the most important electronic privacy cases of the 21st century. Meanwhile, the fight over net neutrality has shifted to the state level, where legislation is being sought to hold Internet Service Providers (ISPs) accountable for discriminatory practices; California, New York, Oregon, Illinois, Maryland, Minnesota, Montana, and New Jersey are at the forefront. Each of these fights will be critical to preserving digital rights.
2017 brought some of the largest data security breaches and cyber-attacks in the U.S., including Equifax, Yahoo, WannaCry, and the voter data breach, among others. They demonstrated the severity of cybersecurity risks as well as the failure of governments and the private sector to protect citizens. With the threat of more state-sponsored cyber-attacks, voter data breaches, hackable online voter registration systems, and the first digital U.S. Census in 2020, the public may soon start demanding greater accountability and protections from governments and the private sector.
Recent shifts in the online media ecosystem, which relies increasingly on bots, "filter bubbles," and search engine manipulation, along with growing cases of online harassment, bias, and disinformation, have highlighted the vulnerabilities of democratic societies in the Digital Age. While technical, legal, and policy solutions to these growing socio-technical problems are still needed, technology companies will face increased pressure to work alongside civil society to monitor, prevent, and resolve these issues. In 2018, shareholders and government regulators may therefore usher in a new era of corporate online responsibility.
Technology has become essential to campaigning, electoral politics, social movements, and legislative change. Bernie Sanders's tech-driven Big Organizing, Indivisible, #KnockEveryDoor, Run For Something, and the Women's March, #MeToo, #TimesUp, and #NiUnaMas, among many others, show how spaces for political change have also become incubators for tech innovation and prototyping. Yet rather than embrace an era of ongoing innovation, in 2018 civil society may take a more critical look at the impact of these platforms and tech-driven civic engagement models on building and sustaining political power in the Digital Age. The upcoming midterm elections in the U.S. and critical presidential elections this year in countries such as Egypt, Mexico, and Brazil will provide opportunities to examine the role of political tech in electoral politics.
Governments around the world are increasingly purchasing and relying on private-sector, tech-enabled, and data-driven technologies. These technologies are being used for everything from welfare and benefits delivery to policing, sentencing, and immigration and border enforcement. While they may offer greater efficiency and effectiveness, they also escalated surveillance and censorship in 2017. In 2018, we'll continue to see technology solution, data brokerage, and data analysis vendors selling new services and tools to government and the public sector, resulting in a heightened focus on unveiling profit-seeking ventures in government procurement of private-sector technologies. Meanwhile, city- and state-level mobilizations working to pass legislation that increases the transparency of government technology procurement may offer key lessons for the field.
In 2018, we'll see the expansion of automation and prediction through the use of predictive analytics, machine learning technologies, the Internet of Things (IoT), and other intelligent infrastructures. We'll also see increasing awareness of the vast private- and public-sector ecosystems through which our personal data is collected, sold, and used by social media platforms, governments, and third-party companies. As a result, we'll see a growing focus on data governance, including a greater need to define responsible data principles, rights and protections for vulnerable communities, and new systems to regulate data collection on the front end and its use on the back end.
Two trends are currently merging, with implications for the future of the Internet. Concerns abound about the end of net neutrality, the profit-driven models of social media platforms, and online misinformation. At the same time, groups are experimenting with new models of local and decentralized Internet and peer-to-peer (P2P) collaboratives through cryptocurrencies, like Bitcoin, and blockchain. 2018 will see greater research into, and experimentation with, how these technologies can provide alternatives to an increasingly centralized and monopolistic online environment.
Biometric technologies such as facial recognition are currently used by governments around the world in policing, border and immigration enforcement, national identification systems, rural credit markets, and public service delivery. Most recently in the U.S., JetBlue and Delta tested facial recognition boarding and biometric exit immigration procedures. The seemingly innocuous ways that biometric technologies are being implemented in daily life are normalizing their use while sidestepping the difficult questions of transparency, accountability, and data ownership and governance highlighted earlier.
Even though most organizations and companies are still in the early days of implementing machine learning and artificial intelligence (AI), private-sector leaders and civil society groups are beginning to explore both the risks and the social-good potential of these technologies. In 2017, more and more academic institutions launched research centers focused on fostering cross-disciplinary exploration of the social implications of these technologies, as well as on developing policy proposals and industry-specific best practices to guide their use and deployment. In 2018, we'll continue to see growth in research in this space and begin to see how policymakers and activists use that research.
The growth of automation and digital platforms in the workforce and labor markets brings the promise of greater efficiency, productivity, and opportunity. Yet the broader labor market realities (e.g., wage stagnation, fissuring, inequality, immigration) in which these technologies are deployed must be closely examined. Helping societies adapt to the changing nature of work will require multi-stakeholder engagement among policymakers, researchers, technology firms, civil society, and the general public. These areas of work are just starting to be defined, and in 2018 we may see the beginnings of key policy proposals.
A special thanks to my colleagues, Alberto Cerda, Ritse Erumi, and Michelle Shevin whose suggestions and guidance made this list possible.