Artificial Intelligence

Notes, ideas and critical thoughts around the idea, business and visions of artificial intelligence.

These are a couple of notes and prompts I collected while using GPT in a research and writing project.

  1. Setup
  2. Research
    1. Summaries
    2. Simplify
  3. Writing
    1. Generate Headlines
    2. Simplify your writing
    3. Sound more eloquent
    4. Further Questions
  4. The Limits of GPT
    1. Sources
    2. Structure



I currently use GPT mainly through the “Text Generator” plugin for Obsidian. It’s not perfect and the templates are (for me) currently a bit buggy, but it works well enough.

One example process I’ve been using: OCR a page with the phone → paste the text into Obsidian → generate a first summary in bullet points through GPT → edit to taste.
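As a sketch, the glue for this workflow can live in a few lines of Python. The helper below only builds the prompt text that gets pasted into Obsidian; the OCR step and the actual GPT call are handled by the tools mentioned above, so the function name and template wording here are my own assumptions, not part of any plugin API.

```python
def make_summary_prompt(ocr_text: str, sentences: int = 3) -> str:
    """Wrap OCR'd page text with the bullet-point summary instruction.

    The template wording is an assumption; adjust it to taste before
    pasting the result into Obsidian / the Text Generator plugin.
    """
    return (
        f"{ocr_text.strip()}\n\n"
        f"Give me a summary in {sentences} sentences, as bullet points."
    )

# Example: text OCR'd from a book page on the phone
page = "Automation happens when it is economically beneficial, not when it is technically feasible."
print(make_summary_prompt(page, sentences=2))
```

The point of keeping the template in one place is that you can tweak the instruction once (for instance, adding “as bullet points”) and reuse it across every note.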



Give me a summary

Give me a summary in [number] sentences

This is pretty straightforward, but GPT is actually not bad at generating summaries. Paste in the text or paragraph, add one of the prompts above, and there you go.

So far I’ve used this approach to generate first drafts for executive summaries and short summaries for notes in my Obsidian vault.

Caution: GPT will still make mistakes and will most likely miss arguments you find especially important. Always edit, never copy.


Simplify this text

Explain this text to me as if I am [number] years old

Make the content very clear and easy to understand

You can ask GPT to break down a complex paragraph from a paper you have trouble understanding. In my experience this works roughly 90% of the time. It’s definitely not perfect, but it may help in cutting through dense writing.

Caution: GPT may appear smart but isn’t able to actually understand the writing. Nuance and arguments might get lost in translation.


Generate Headlines

Suggest some headlines

Suggest some headlines in the style of [publication]

Hate writing headlines? Let GPT do some of the heavy lifting. Paste the text, ask for headlines and get some suggestions back. Again, edit carefully. You can also shape the output by adding adjectives like snappy, short or scientific.

GPT is also surprisingly good at copying the style of publications, be that Bloomberg or the New York Times.
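The headline prompt variants above (plain, with adjectives, in a publication’s style) can be composed mechanically. A minimal sketch, where the function name and exact phrasing are my own assumptions:

```python
def make_headline_prompt(text, style=None, adjectives=None):
    """Build a headline-suggestion prompt from the pasted text.

    style      -- optional publication name, e.g. "Bloomberg"
    adjectives -- optional list of modifiers, e.g. ["snappy", "short"]
    """
    request = "Suggest some"
    if adjectives:
        request += " " + ", ".join(adjectives)
    request += " headlines"
    if style:
        request += f" in the style of {style}"
    return f"{text.strip()}\n\n{request}."

# Example: ask for snappy, short headlines in Bloomberg's style
print(make_headline_prompt("Your article text here.",
                           style="Bloomberg",
                           adjectives=["snappy", "short"]))
```

Whether you script this or type the modifiers by hand, the effect is the same; the script just keeps the variants consistent across notes.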

Simplify your writing

Simplify this text

Explain this text to me like I am [number] years old

Make the content very clear and easy to understand

This works in a similar fashion to a summary, with the added benefit of GPT attempting to break down more complex explanations. Again, simply paste the text and add one of the prompts at the end.

Caution: GPT may appear smart but isn’t able to actually understand the writing. Edit accordingly and treat the output as you would anyone else’s writing suggestions.

Sound more eloquent

Make me sound more eloquent

This one is for the lazy. GPT is actually pretty good at rephrasing paragraphs and sentences (or at least giving you a second option). Perfect for some light copy-editing or gap-filling.

Copy your text, add the prompt and check the output. You can even drop in bullet points and let the model fill in the gaps.

Caution: GPT may appear smart but isn’t able to actually understand your writing. Edit accordingly and treat the output as you would anyone else’s writing suggestions.

Further Questions

What are some interesting questions to continue this discussion?

This one is more experimental. In short, GPT will attempt to generate a number of questions based on the provided text. Some might help you flesh out your arguments further, while others might turn out rather boring.

Caution: GPT tends to generate rather obvious questions and has a somewhat disturbing predilection for “how might we…”. Consider yourself warned.

The Limits of GPT

Every technology has its limits. Here are some things I found with GPT.


Sources

GPT is notoriously bad with sources. Ask for some and it will simply invent citations and references. It’s not a search engine, or even a knowledge engine. Never treat it like one.


Structure

Caution: GPT won’t save you from writing bad arguments or badly structured articles. Yes, you can generate whole articles through the interface, but only if you want to arrive at the same SEO mush as everyone else.


Understanding Automation

  1. Automation doesn’t happen at the job level, but at the task level.

  2. Automation doesn’t strictly mean human replacement; it can also mean augmenting given tasks.

  3. Automation will always need maintenance, updates and upgrades. There is no date X at which the process is finished.

  4. Automation happens when it’s economically beneficial, not when it’s technically feasible. It is, of course, both the product and producer of capitalism.

Keeping this in mind, almost every new technology can be placed in one of the following quadrants.

Tomatoes, Tomatoes

This section from Jack Stilgoe’s excellent book Who’s Driving Innovation? is, in my opinion, a perfect example against the technological determinism inherent to the automation discourse. Just because a technology exists doesn’t mean it will be adopted. Automation is driven not by technology but by politics and economics.

If we want to understand the politics of today’s and tomorrow’s technologies, we should look back to the technologies that are now regarded as part of society’s inevitable industrialisation and ask who benefitted and why. The philosopher of technology Langdon Winner asks us to consider the tomato. The tomatoes on a twenty-first century supermarket shelf are the way they are because of a set of organisational and technological choices. The technologisation of the tomato was extraordinarily rapid. In 1960, the tomato fields of California contained fruit in a variety of shapes and sizes and were picked by hand; mostly by the hands of tens of thousands of braceros (immigrant Mexican workers). By 1970, almost all of California’s tomatoes were harvested by machines.1

The machine that enabled the industrialisation of tomato farming came from a collaboration between a fruit breeder and an aeronautical engineer at the University of California, Davis, in the 1940s. In one pass, the tomato harvester could cut a row of plants, shake the fruit from their stalks and drop them into a trailer. Humans were required only to drive the machine, maintain it, check the tomatoes and throw out any dirt, stalks or small animals that ended up in the trailer. After early attempts to get the fruit to survive the journey from field to trailer intact, the researchers realised that, for the tomato harvester to work as intended, the tomato itself had to be tougher and less tasty—good for ketchup and processed food; bad for salads. Fields had to be rectangular, flat and well-irrigated. Farmers had to learn how to, in the words of one of the engineers, ‘grow for the machine’.2 Each device was expensive but, if a farm was big enough to afford one, it could dramatically cut costs.


The tomato machines were available for a number of years before they were deployed widely. They only became popular once policies were introduced to expel cheap immigrant labour. This allowed US farm workers to earn more, but increased farmers’ incentives to automate and turn their fields over to tomatoes.

  1. The economic history of the tomato harvester is explained by Clemens et al. (2018).

  2. Quoted in The Tomato Harvester, Boom California, vester/ 


A Critical Reading List

  1. How does AI/ML/DL work?
  2. Limitations and challenges of AI
  3. Better reporting
  4. AI & Jobs
  5. Criticism

How does AI/ML/DL work?

Course: Artificial Intelligence (Fall 2015) Prof. Patrick Henry Winston. MIT.

A course of 12 video lectures (about 50 minutes each) on the technical details of AI. I found the two lectures on neural nets especially helpful; they give a great introduction to the topic.

Jason’s Machine Learning 101 (December 2017). A great, in-depth overview of AI, machine learning and deep learning and how all those things actually work, as well as some notes on limitations and challenges.

Limitations and challenges of AI

A Berkeley View of Systems Challenges for AI (December 15, 2017). Ion Stoica, Dawn Song, Raluca Ada Popa, David Patterson, Michael W. Mahoney, Randy Katz, Anthony D. Joseph, Michael Jordan, Joseph M. Hellerstein, Joseph Gonzalez, Ken Goldberg, Ali Ghodsi, David Culler, Pieter Abbeel. Berkeley EECS.

Deep Learning: A Critical Appraisal (December 18, 2017). Marcus, Gary. New York University.

Gary Marcus’ view on deep learning is more critical than the Berkeley overview. He argues that deep learning won’t be enough to ever reach artificial general intelligence; there are definite limits to the technique.

Against a background of considerable progress in areas such as speech recognition, image recognition, and game playing, and considerable enthusiasm in the popular press, I present ten concerns for deep learning, and suggest that deep learning must be supplemented by other techniques if we are to reach artificial general intelligence.

A summary can be found in the Wired article Wise up, deep learning may never create a general purpose AI (January 20, 2018) by Greg Williams.

Innateness, AlphaZero, and Artificial Intelligence (n.d.). Marcus, Gary. New York University.

In this paper, I consider as a test case a recent series of papers by Silver et al (Silver et al., 2017a) on AlphaGo and its successors that have been presented as an argument that “even in the most challenging of domains: it is possible to train to superhuman level, without human examples or guidance”, “starting tabula rasa.” I argue that these claims are overstated, for multiple reasons. I close by arguing that artificial intelligence needs greater attention to innateness, and I point to some proposals about what that innateness might look like.

Greedy, Brittle, Opaque, and Shallow: The Downsides to Deep Learning (February 2, 2018). Pontin, Jason. Wired.

A Wired article that bundles most of the criticism and known limits of DL/ML.

We’ve been promised a revolution in how and why nearly everything happens. But the limits of modern artificial intelligence are closer than we think.

Better reporting

Troubling Trends in Machine Learning Scholarship (July 10, 2018). Zachary C. Lipton & Jacob Steinhardt. Approximately Correct.

Lipton and Steinhardt examine rhetorical and narrative patterns in published machine learning papers. While these patterns are fairly harmless, they can nevertheless irritate a casual reader and lead to misrepresentation in articles.

Flawed scholarship threatens to mislead the public and stymie future research by compromising ML’s intellectual foundations. Indeed, many of these problems have recurred cyclically throughout the history of artificial intelligence and, more broadly, in scientific research. (…) By promoting clear scientific thinking and communication, we can sustain the trust and investment currently enjoyed by our community.

AI & Jobs

What can machine learning do? Workforce implications (December 22, 2017). Erik Brynjolfsson, Tom Mitchell. Science.

A great article on the characteristics a task needs to have to be automated by a machine learning system.

Although recent advances in the capabilities of ML systems are impressive, they are not equally suitable for all tasks. The current wave of successes draw particularly heavily on a paradigm known as supervised learning, typically using DNNs. They can be immensely powerful in domains that are well suited for such use. However, their competence is also dramatically narrower and more fragile than human decision-making, and there are many tasks for which this approach is completely ineffective


The Wired Brain: How not to talk about an AI-powered Future (March 9, 2017). Montani, Ines.

Some good thoughts about the representation of AI in the media and ideas on how to better report on AI.

The new opportunities emerging from Machine Learning are very tempting. The wildest scenarios from decades of futuristic science fiction are suddenly (almost) possible. But does that mean they’re actually the most practical things to build? The problem here lies in how those fictional ideas were conceived. The way people imagine technology of the future is heavily biased by their current experiences and expectations.

The Smart, the Stupid, and the Catastrophically Scary: An Interview with an Anonymous Data Scientist (November 2016). Logic Magazine, Issue 01: Intelligence.

An interview with an anonymous data scientist on the relationship between big data and machine learning, the artificial marketing hype and the limits of the technology. Highly recommended reading.

Situating Methods in the Magic of Big Data and Artificial Intelligence (September 20, 2017). Elish, M. C. and Boyd, Danah. Communication Monographs.

Elish and Boyd explore the myth around Big Data and AI as “magical” technologies. The article is worth reading for an insight into the business of AI and Big Data, as well as a critical perspective on the real possibilities and challenges of both technologies. A shorter version is available on Medium.

The uncritical embrace of AI technologies has troubling implications for established forms of accountability, and for the protection of our most vulnerable populations. AI is increasingly being positioned as the answer to every question, in part because AI seems to promise not only efficiency and insight, but also neutrality and fairness — ideals that are often viewed as impossible to achieve through individual human or organizational decision-making processes. The fantasies and promises of AI often obscure the limitations of the field and the complicated trade-offs of technical work done under the rubric of “AI.”

Imagining the thinking machine: Technological myths and the rise of artificial intelligence (June 20, 2017). Simone Natale, Andrea Ballatore. Convergence.

Similar to Elish and Boyd, Natale and Ballatore explore three narratives and myths around AI and their consequences. Focused more on the role media plays in forming the public imagination around “the thinking machine”, this paper is worth a look for media professionals.

Based on a content analysis of articles on AI published in two magazines, the Scientific American and the New Scientist, which were aimed at a broad readership of scientists, engineers and technologists, three dominant patterns in the construction of the AI myth are identified: (1) the recurrence of analogies and discursive shifts, by which ideas and concepts from other fields were employed to describe the functioning of AI technologies; (2) a rhetorical use of the future, imagining that present shortcomings and limitations will shortly be overcome and (3) the relevance of controversies around the claims of AI, which we argue should be considered as an integral part of the discourse surrounding the AI myth.

Manufacturing an Artificial Intelligence Revolution (November 27, 2017). Katz, Yarden.

Definitely one of the most critical views on AI I’ve read so far. I’m not sure I agree with every point Katz raises, but it’s a perspective worth considering.

I argue here that the “AI” label has been rebranded to promote a contested vision of world governance through big data. Major tech companies have played a key role in the rebranding, partly by hiring academics that work on big data (which has been effectively relabeled “AI”) and helping to create the sense that super-human AI is imminent. However, I argue that the latest AI systems are premised on an old behaviorist view of intelligence that’s far from encompassing human thought. In practice, the confusion around AI’s capacities serves as a pretext for imposing more metrics upon human endeavors and advancing traditional neoliberal policies. The revived AI, like its predecessors, seeks intelligence with a “view from nowhere” (disregarding race, gender and class)—which can also be used to mask institutional power in visions of AI-based governance. Ultimately, AI’s rebranding showcases how corporate interests can rapidly reconfigure academic fields. It also brings to light how a nebulous technical term (AI) may be exploited for political gain.