Automation doesn’t happen at the job level, but at the task level.
Automation doesn’t strictly mean replacing humans; it can also mean augmenting given tasks.
Automation will always need maintenance, updates, and upgrades. There is no date X at which the process is finished.
Automation happens when it’s economically beneficial, not when it’s technically feasible. It is, of course, both a product and a producer of capitalism.
Keeping this in mind, almost every new technology can be placed in one of the following quadrants.
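One way to read this quadrant framing is as a 2×2 over the two criteria from the list above. A minimal sketch (the axis names and quadrant labels are my assumption, not taken from the original):

```python
def automation_quadrant(technically_feasible: bool, economically_beneficial: bool) -> str:
    """Place a technology in one of four quadrants along the two
    criteria above: technical feasibility and economic benefit.
    Labels are hypothetical illustrations."""
    if technically_feasible and economically_beneficial:
        return "automated"               # happens: feasible and pays off
    if technically_feasible:
        return "feasible but idle"       # possible, but nobody pays for it
    if economically_beneficial:
        return "wanted but out of reach" # worth it, not yet technically doable
    return "neither"
```

The point of the sketch is only that economic benefit, not technical feasibility alone, decides whether automation actually happens.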
Course: Artificial Intelligence (Fall 2015) Prof. Patrick Henry Winston. MIT.
A course of 12 video lectures (about 50 minutes each) on the technical details of AI. I found the two lectures on neural nets especially helpful; they give a great introduction to the topic.
Jason’s Machine Learning 101 (December 2017). A great, in-depth overview of AI, machine learning, and deep learning, how all these things actually work, and some notes on limitations and challenges.
A Berkeley View of Systems Challenges for AI (December 15, 2017). Ion Stoica, Dawn Song, Raluca Ada Popa, David Patterson, Michael W. Mahoney, Randy Katz, Anthony D. Joseph, Michael Jordan, Joseph M. Hellerstein, Joseph Gonzalez, Ken Goldberg, Ali Ghodsi, David Culler, Pieter Abbeel. Berkeley EECS.
Deep Learning: A Critical Appraisal (December 18, 2017). Marcus, Gary. New York University.
Gary Marcus’ view on deep learning is more critical than the Berkeley overview. He argues that deep learning won’t be enough to ever reach artificial general intelligence; there are definite limits to the technique.
Against a background of considerable progress in areas such as speech recognition, image recognition, and game playing, and considerable enthusiasm in the popular press, I present ten concerns for deep learning, and suggest that deep learning must be supplemented by other techniques if we are to reach artificial general intelligence.
A summary can be found in the Wired article Wise up, deep learning may never create a general purpose AI (January 20, 2018) by Greg Williams.
Innateness, AlphaZero, and Artificial Intelligence (n.d.). Marcus, Gary. New York University.
In this paper, I consider as a test case a recent series of papers by Silver et al (Silver et al., 2017a) on AlphaGo and its successors that have been presented as an argument that a “even in the most challenging of domains: it is possible to train to superhuman level, without human examples or guidance”, “starting tabula rasa.” I argue that these claims are overstated, for multiple reasons. I close by arguing that artificial intelligence needs greater attention to innateness, and I point to some proposals about what that innateness might look like.
Greedy, Brittle, Opaque, and Shallow: The Downsides to Deep Learning (February 2, 2018). Pontin, Jason. Wired.
A Wired article that bundles most of the criticism of, and known limits to, DL/ML.
We’ve been promised a revolution in how and why nearly everything happens. But the limits of modern artificial intelligence are closer than we think.
Troubling Trends in Machine Learning Scholarship (July 10, 2018). Zachary C. Lipton & Jacob Steinhardt. Approximately Correct.
Lipton and Steinhardt examine rhetorical and narrative patterns in published machine learning papers. While all are fairly harmless, they can nevertheless irritate a casual reader and lead to misrepresentation in press coverage.
Flawed scholarship threatens to mislead the public and stymie future research by compromising ML’s intellectual foundations. Indeed, many of these problems have recurred cyclically throughout the history of artificial intelligence and, more broadly, in scientific research. (…) By promoting clear scientific thinking and communication, we can sustain the trust and investment currently enjoyed by our community.
What can machine learning do? Workforce implications (December 22, 2017). Erik Brynjolfsson, Tom Mitchell. Science.
A great article on the characteristics a task needs to have in order to be automated by a machine learning system.
Although recent advances in the capabilities of ML systems are impressive, they are not equally suitable for all tasks. The current wave of successes draw particularly heavily on a paradigm known as supervised learning, typically using DNNs. They can be immensely powerful in domains that are well suited for such use. However, their competence is also dramatically narrower and more fragile than human decision-making, and there are many tasks for which this approach is completely ineffective.
The Wired Brain: How not to talk about an AI-powered Future (March 9, 2017). Montani, Ines.
Some good thoughts about the representation of AI in the media and ideas on how to better report on AI.
The new opportunities emerging from Machine Learning are very tempting. The wildest scenarios from decades of futuristic science fiction are suddenly (almost) possible. But does that mean they’re actually the most practical things to build? The problem here lies in how those fictional ideas were conceived. The way people imagine technology of the future is heavily biased by their current experiences and expectations.
The Smart, the Stupid, and the Catastrophically Scary: An Interview with an Anonymous Data Scientist (November 2016). n.a. Logic Magazine, Issue 01: Intelligence.
An interview with an anonymous data scientist on the relationship between big data and machine learning, the artificial marketing hype, and the limits of the technology. Highly recommended reading.
Situating Methods in the Magic of Big Data and Artificial Intelligence (September 20, 2017). Elish, M. C. and Boyd, Danah. Communication Monographs.
Elish and Boyd explore the myth around Big Data and AI as “magical” technologies. The article is worth reading for insight into the business of AI and Big Data, as well as for a critical perspective on the real possibilities and challenges of both technologies. A shorter version is available on Medium.
The uncritical embrace of AI technologies has troubling implications for established forms of accountability, and for the protection of our most vulnerable populations. AI is increasingly being positioned as the answer to every question, in part because AI seems to promise not only efficiency and insight, but also neutrality and fairness — ideals that are often viewed as impossible to achieve through individual human or organizational decision-making processes. The fantasies and promises of AI often obscure the limitations of the field and the complicated trade-offs of technical work done under the rubric of “AI.”
Imagining the thinking machine: Technological myths and the rise of artificial intelligence (June 20, 2017). Simone Natale, Andrea Ballatore. Convergence.
Similar to Elish and Boyd, Natale and Ballatore explore three narratives and myths around AI and their consequences. Focused more on the role the media plays in forming public imagination around “the thinking machine”, this paper is worth a look for media professionals.
Based on a content analysis of articles on AI published in two magazines, the Scientific American and the New Scientist, which were aimed at a broad readership of scientists, engineers and technologists, three dominant patterns in the construction of the AI myth are identified: (1) the recurrence of analogies and discursive shifts, by which ideas and concepts from other fields were employed to describe the functioning of AI technologies; (2) a rhetorical use of the future, imagining that present shortcomings and limitations will shortly be overcome and (3) the relevance of controversies around the claims of AI, which we argue should be considered as an integral part of the discourse surrounding the AI myth.
Manufacturing an Artificial Intelligence Revolution (November 27, 2017). Katz, Yarden.
Definitely one of the most critical views on AI I have read so far. I am not sure I agree with every point Katz raises, but it’s a perspective worth considering.
I argue here that the “AI” label has been rebranded to promote a contested vision of world governance through big data. Major tech companies have played a key role in the rebranding, partly by hiring academics that work on big data (which has been effectively relabeled “AI”) and helping to create the sense that super-human AI is imminent. However, I argue that the latest AI systems are premised on an old behaviorist view of intelligence that’s far from encompassing human thought. In practice, the confusion around AI’s capacities serves as a pretext for imposing more metrics upon human endeavors and advancing traditional neoliberal policies. The revived AI, like its predecessors, seeks intelligence with a “view from nowhere” (disregarding race, gender and class)—which can also be used to mask institutional power in visions of AI-based governance. Ultimately, AI’s rebranding showcases how corporate interests can rapidly reconfigure academic fields. It also brings to light how a nebulous technical term (AI) may be exploited for political gain.