Society Ethics Technology (SET) Group

Society Ethics Technology (SET) is a research subgroup within the Orthogonal Research and Education Laboratory (OREL). Founded officially in January 2021, the SET team has its roots in cross-disciplinary collaborations on technology, its impact on everyday life, and the regulation that governs it. Topics of interest include: open source, AI ethics, AI safety, law (particularly AI interpretability and explainability), shared risk, technology-education access, and neuroethics. SET was founded with the intention of creating space for communication and collaboration between those creating technology directly and those interfacing with technology in various other domains.


Sustainable Auditing Tool (SAT): Ethically Sustainable Engagement for Open Source Communities

Open-source Sustainability: An Agent-Based, Cybernetic Approach

Brian McCorkle, Hussain Ather, Himanshu Chougule, Jesse Parent, Bradly Alicea | Bonn Sustainable AI 2023

How can open source communities monitor, track, and encourage engagement? How can key information be preserved amid turnover, or as different parties engage with open source material? How can these questions be answered with different ethical lenses and priorities in mind? We present our Sustainability Auditing Tool (SAT), which takes computational models of open source development communities and allows members and administrators to link their organization’s GitHub information with those models, populating theoretical models with real-world data.
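As a sketch of the kind of GitHub linkage described above (illustrative only; the actual SAT data pipeline may differ, and the function name below is hypothetical), contributor records shaped like the response of GitHub's public `GET /repos/{owner}/{repo}/contributors` endpoint can be turned into normalized engagement weights for seeding a model population:

```python
def contributors_to_agents(contributors):
    """Convert GitHub-style contributor records (each with 'login' and
    'contributions' fields, as in the GitHub REST API response) into
    normalized engagement weights for seeding an agent population."""
    total = sum(c["contributions"] for c in contributors)
    return {c["login"]: c["contributions"] / total for c in contributors}

# Example records shaped like the API response (values are made up):
sample = [
    {"login": "alice", "contributions": 120},
    {"login": "bob", "contributions": 60},
    {"login": "carol", "contributions": 20},
]
weights = contributors_to_agents(sample)  # alice carries 60% of activity
```

In a live setting the records would come from an authenticated API call; here they are stubbed so the transformation itself is clear.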

Our project stems from research and experimentation conducted over the summer of 2022 as part of Google Summer of Code, with the Orthogonal Research and Education Lab under the International Neuroinformatics Coordinating Facility.

The models have all been created with a focus on ethical and sustainable practices, and can help those working in open source to see burn-out coming in advance, or to experiment with different priorities for their organization. We are now implementing the models generated over the Summer of Code period into a more user-friendly and engagement-ready website, and look to continue to expand and develop these practices in the future. Using NetLogo and Python integrations to create agent-based models, we also aim to enable collective cognition models that help communities understand the nature of their userbase, and how to further sustainable, ethically aware engagement over time.
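To illustrate the burnout dynamics such agent-based models can surface (a minimal sketch in plain Python rather than NetLogo; all parameter names and values are hypothetical, not SAT's actual model), each contributor agent can carry an energy reserve that is drained by workload and partially restored each step:

```python
from dataclasses import dataclass

@dataclass
class Contributor:
    name: str
    energy: float = 1.0     # 1.0 = fully rested, 0.0 = burned out
    workload: float = 0.1   # energy spent per time step

def step(agents, recovery=0.05, burnout_threshold=0.2):
    """Advance the community one time step; return agents at risk of burnout."""
    at_risk = []
    for a in agents:
        a.energy = max(0.0, min(1.0, a.energy - a.workload + recovery))
        if a.energy < burnout_threshold:
            at_risk.append(a.name)
    return at_risk

# A maintainer carrying a heavy workload burns out within a few steps,
# while a casual contributor's recovery keeps pace with their workload:
community = [Contributor("maintainer", workload=0.25),
             Contributor("casual", workload=0.05)]
for t in range(10):
    flagged = step(community)
```

Running the model forward flags the overloaded maintainer well before the casual contributor, which is the kind of early-warning signal the paragraph above describes; a real model would add interaction effects and empirically fitted parameters.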

see also: Sustainability Auditing Tool

Beyond Ethics-Washing: A Review of Tech Ethics Boards and DEI Initiatives

Jesse Parent, Jennifer Jiang, Bradly Alicea | (Forthcoming)

Ethics boards at Big Tech companies have come under heavy scrutiny in the last several years. Tensions arise over the superficiality of these boards, as seen in the erosion of mottos like "Don't Be Evil" (Google), the outright dissolution of such groups, and the questions that arise as organizations turn towards more profit-oriented modes of operation (OpenAI). A congruent set of challenges is arising in various Diversity, Equity, and Inclusivity (DEI) initiatives across sectors; "The Great Breakup" describes women who, dissatisfied with workplace environments or overlooked for their DEI efforts, are leaving their roles.

In this paper, we review recent trends in the success or failure of both ethics boards and DEI practices. We reflect on criticism of ‘ethics-washing’: making ineffectual gestures towards ethics practices that serve show or marketing purposes more than actual impact. We particularly investigate whether action or input from ethics or DEI groups is incorporated into an organization’s feedback cycles and enables a sense of agency or buy-in among its constituents.

In addition to our literature review, we are conducting interviews and surveys with professionals in and around these regulatory boards: executives who are leading the design, participants who have a say in them, and those who have been affected by their decisions. Through these discussions, and in combination with our research on case studies of Tech Ethics Boards and DEI initiatives, we are developing a set of Best Practices to inform both management design and participant activity within these regulatory settings.

Key Concepts in Tech Careers (and Tech Ethics) in 2023 - AI-Generated Content, Open Source + Open Data, and Beyond

Jesse Parent, Valeria Schnake, Ankit Grover, Aiden Tripodi, Jennifer Jiang, Bradly Alicea | NYCWiC 2023

AI-generated content is changing norms across industries: from DALL-E’s artistic creations, GitHub Copilot’s AI pair-programming guidance, and interpersonally insightful chatbots like Replika and Woebot, to ChatGPT’s wide-ranging outputs and whatever is next for LLMs. We discuss these topics, including implications for tech law, asking: what should aspiring technologists, educators, or artists be aware of?

Similarly, Open Source and Open Data are major movements within software development and data management. A NeurIPS keynote spoke of a data-centric era in which ML may be becoming an experimental science. The NIH has issued the Data Management and Sharing (DMS) policy promoting the sharing and management of scientific data, with speakers at Society for Neuroscience 2022 emphasizing that graduate students will be on the front line of these large-scale changes, not least in their impact on grants and other funding. Novel technological advances often require specific maintenance of software and firmware - but what happens when the updates stop, especially in open source projects?

We will tour these real-world topics alongside our efforts in Sustainable Open Source Ethics & Communities, including reviewing our Sustainability Auditing Tool for developers, analyzing the potential for Ethical Regulators as parts of technology development, and other key issues in accessibility and AI/Tech ethics that are shaping the landscape for those in technology-centered careers.

Our Virtual Lives and Digital Companionship: Opportunities and Challenges of Shared Experience in an Augmented Century

Jesse Parent, Avery Lim, Amanda Nelson, Jenny Liu Zhang, Bradly Alicea | WeRobot2022

For much of dominant scientific and cultural history, in disciplines studying human experience, the mind was seen as independent from its environment, and our means for interfacing with other entities was limited to our immediate physical surroundings. Yet in the 21st century, paradigms and lived experiences have undergone significant transformations. Given the amount of communication in which we engage via technological interfaces, and with or through digital or augmented means of companionship, how do we interpret and navigate our personal experience? Here we examine broader societal and cultural forces at play in an era of myriad sources of information and opportunities for stimulation, from interest-based digital communities, "fake news", and ever-enhanced artificial assistants, to nuanced robots and interactive agents. We consider a broad contextual scope concerning pressures from existing social and power dynamics, and examine the means of mapping such influence in our own lives. We also propose a new methodology for exploring one's personal experience, moving throughout various degrees of hybrid digital existences. With lenses such as neurodiversity and the variety of lived experience; technology-reinforced power structures; and the means by which one's comprehension of experience is influenced, we provide an overview of our recent work at the nexus of society, technology, and ethics.

Frontiers in Data Privacy and Tech Ethics

Jesse Parent, A. R. Ordis, Brian McCorkle, Nick Truelove, Amaris Arias, Bradly Alicea | NYCWiC 2022

As mediums for human experience become increasingly technologically centered, the role of data and its associated security and ethical handling becomes increasingly influential. We present a number of different projects in our lab via the lens of frontiers of data privacy and technology ethics: what are critical areas of challenge and opportunity? A centerpiece of our recent work has been investigating the applicability and novel mitigation strategies afforded by data trusts - structures wherein a board of trustees takes responsibility for the data associated with its beneficiaries. How, and when, should data trusts be enacted? A counterpart to data management is the ethics behind data use, aggregation, collection, and evaluation. “Bias in AI” and “AI ethics” are significant concepts - but what are the battlegrounds on which these concerns play out? We examine the role of causal inference modeling and mechanism design, and consider key domains like hiring technology and norms in academic discourse. Combined with the context of major recent events in the ethics of Big Tech, we provide pathways for future research in the realm of technology, society, and the data which permeates life in the 21st century.

Towards a Unified Ethical Framework for AI: Incorporating Critical Voices

Jesse Parent, Krishna Katyal, Shruti Raj Vansh Singh, Nick Truelove, Minh Tran, Daniela Cialfi, Bradly Alicea | NYCWiC 2021

As artificial intelligence and the automation of critical decision-making become more ubiquitous, society’s need for systematic attempts to guide responsible engagement with and implementation of these tools deepens. But in a world of preexisting and diverse approaches to ethics and culture, how can we strive towards a coherent framework to guide the rapid advances in AI? This is no trivial task, but to make headway on this path, we will discuss key concepts influencing the development of ethical AI frameworks, including: the relationship between technology implementation and lagging law updates; the trouble of “neutrality”; historical influences and impacts on norms; automated decision-making; and an examination of what broad values can guide a framework for the 21st century. Our aim is to lay out an overview of the challenges of developing a coherent ethical framework, note challenges for buy-in and implementation, and suggest fruitful areas of further research.

How Will AI + Tech Ethics Affect Our Careers: An AI Ethics Survey

Jesse Parent, A. R. Ordis, Avery Lim, Anna Wang, Bradly Alicea | NYCWiC 2020

We discuss challenges and opportunities across industries regarding ethical issues in the use and development of AI and big data. We draw from personal experience and interviews in academia, information technology, cybersecurity, social work, entrepreneurship, research and law. Findings are compared to projections of technology evolution within those industries. We represent a diverse range of investigators across age, gender, sexuality, industry, experience, and technical expertise - unified in analysis of how technology will impact our future careers. Through this lens, we aim to provide a scouting report on navigating ever-more-technical workplaces with insight for how to ethically engage in 21st century careers.

AI and Equality: a Multidisciplinary Approach to Confronting AI Bias

Jesse Parent, Valeria Schnake | NYCWiC 2019

We map the trajectory of reliance on AI decision making and evaluation, identifying challenges and opportunities for beneficial AI. From both a technical and ethical perspective, we examine the current impact AI has on society as well as potential threats to justice and equality in the future.

Research Group Members & Opportunities

Prospective Lab Members

Society Ethics Technology (SET) is a subgroup within the Orthogonal Research and Education Lab (OREL); working with our team requires acceptance into OREL. OREL and SET are inclusive communities, with a particular emphasis on interdisciplinary research, combined with support and mentoring for early-career researchers.

If you are interested in joining our research group, reach out to OREL Director Bradly Alicea (bradly.alicea @) and SET Program Manager Jesse Parent (jesse @). Please provide a short statement of research interest indicating topics or projects you'd like to work on, alongside your CV.

We are currently accepting 2023 internship applications.

Last Updated: April 15, 2023

Current Members

Alumni & External Partners