Vredenburgh argues that society ought to compel institutions to provide individuals with explanations of the decisions the institution makes about them (e.g., the acceptance or rejection of an application for financial aid), in order to preserve individuals' ability to advocate for themselves as they interact with the institution. This is more difficult for an institution to achieve if it uses a "black-box" algorithm in its decision-making processes.
by Aivodji et al. Some AI systems used in decision-making processes can produce (ostensible) explanations of their decisions along with their conclusions. However, if the explanation-producer is itself a "black-box," it may in effect be searching for apparently fair explanations of unfair results. Combined with many people's perception that computer systems are objective, this raises the worry that institutions will make unfair decisions and "fairwash" their explanations, whether or not they are aware of doing so.
Hancox-Li highlights one feature we would require of AIs that produce explanations, "robustness": given similar inputs, a system should produce similar outputs (e.g., if only the name on a resume changes, the AI should give the same decision about whether to invite the candidate for an interview AND give the same explanation for why it made that recommendation).
Beigang distinguishes two notions of fairness we might want from decision processes involving algorithms: 1) that any prediction the algorithm makes is accurate, and 2) that its recommendations result in a just distribution of goods.
by Bricken et al. at Anthropic. The behavior of individual neurons in a neural net is often inscrutable, as they are frequently activated in response to apparently unrelated features of the input. This is called "polysemanticity." The Anthropic team has made progress in identifying combinations of neurons that are monosemantic even though the individual neurons are polysemantic, which is a step toward explainability.
OpenAI's program to attempt to ensure the alignment of superintelligent AI with human intent within the next four years, dedicating 20% of its compute to this goal.
Describes the large amounts of electricity and water (for cooling) used to train AI systems. This issue is of increasing concern in a variety of areas requiring significant computing power, such as AI and cryptocurrency.
Wall Street Journal article on Altman's attempt to raise $7 trillion to expand the world's chip-building capacity, as there aren't enough chips to meet the AI industry's demand. This is a global-economy-shifting amount of money; for example, it is more than the combined value of Microsoft and Apple. The project involves expanding global supply-chain, energy, and data-center infrastructure to support massively increased chip production.
Details problems with a dataset of books used to train many prominent AI systems, primarily copyright issues involving the books in the dataset and bias toward particular sorts of books, authors, etc.
The New York Times argues that chatbots like ChatGPT amount to copyright infringement, as they often relay New York Times reporting to users and hallucinate New York Times citations, among other things.
Crowdwork
Work Without the Worker by Phil Jones
ISBN: 9781839760433
Publication Date: 2021-10-05
An accessible analysis of the new forms of work whose seismic changes will increasingly determine the future of capitalism. Automation and the decline in industrial employment have led to rising fears of a workless future. But what happens when your work itself is the thing that will make your job obsolete? In the past few years, online crowdworking platforms - like Amazon's Mechanical Turk and Clickworker - have become an increasingly important source of work, particularly for those in the Global South. Here, small tasks are assigned to people online, and are often used to train algorithms to spot patterns, patterns which, through machine learning, those same algorithms will then be able to spot more effectively than humans. Used for everything from the mechanics of self-driving cars to Google image search, this is an increasingly powerful part of the digital economy. But what happens to work when it makes itself obsolete? In this stimulating work that blends political economy, studies of contemporary work, and speculations on the future of capitalism, Phil Jones looks at what this often murky and hidden form of labour looks like, and what it says about the state of global capitalism.
A look at the growing microtask industry, with particular attention to the different wages being offered in different locations and to different kinds of taskers, as well as the precarious nature of the work
by Zyskowski et al. Includes interviews with crowdworkers with disabilities. Some people with disabilities have begun doing crowdwork because it often allows a more flexible schedule than conventional office work, and does not require a commute.
Describes the traumatic nature of the work of labeling text as violence, hate speech, sexual abuse, etc. in order to train ChatGPT, work often performed by workers in poor countries earning low wages. (Contains descriptions of sexual abuse.) See also the "Crowdwork" box in the "Social Implications" tab.