GPT-4's Problems of Discernment, Too Sleazy to Build AGI, Can ChatGPT "Think"?, and How to Access GPT-4
AI Bytes: Volume CI, Issue #14
Get smarter faster with AI Bytes: 3 articles, 2 podcasts, and 1 video.
Large language models like GPT-4 have a shortcoming that has received little attention: their shoddy recall. Each time a model generates a response, it can take into account only a limited amount of text, known as the model's context window. GPT-4 has a context window of roughly 8,000 tokens (about 6,000 words), and it can't retain information from one session to the next. The problem is not really one of memory but of discernment: large language models have no capacity for triage, no ability to distinguish garbage from gold.
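The context-window limit described above can be sketched as a naive sliding window: once the budget is spent, everything older is simply dropped. This is an illustrative assumption, not how production models actually work — the `fit_to_context` function and the word-based token count are stand-ins (real models use subword tokenizers and count tokens, not words).

```python
def fit_to_context(messages, max_tokens=8000):
    """Keep the most recent messages whose combined length fits
    within the context window; anything older is discarded.
    Token counts are approximated by whitespace-split words."""
    kept, used = [], 0
    for msg in reversed(messages):      # walk from newest to oldest
        cost = len(msg.split())
        if used + cost > max_tokens:
            break                       # everything older is "forgotten"
        kept.append(msg)
        used += cost
    return list(reversed(kept))         # restore chronological order

history = [
    "first session notes " * 3000,      # ~9,000 words: exceeds the budget
    "recent question",
]
print(fit_to_context(history))          # only the recent message survives
```

Note that the window is purely positional: the oldest text is dropped regardless of how important it was, which is exactly the lack of triage the article points to.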
This article discusses the increasing use of artificial intelligence in workplace decision-making, particularly in hiring and other job-related decisions. AI has shown promise, but its potential flaws may require oversight and regulation. Ultimately, the right mix of AI and human intelligence can lead to better workplace decisions.
OpenAI, a company with ambitions to create Artificial General Intelligence (AGI), has been criticized for being for-profit, closed-source, and in partnership with Microsoft. The company was originally founded as a non-profit, open-source research company, but has since shifted its focus to generating revenue. Critics argue that this shift has compromised the company's founding ideals of transparency, openness, and collaboration, and that the company's current iteration is untrustworthy.
Guy Raz interviews Sam Altman, a leader in AI development and co-founder of the nonprofit OpenAI, who shares his journey from Stanford dropout to president of Y Combinator. Altman also discusses his hopes and fears for the future of AI and how his company is working to ensure it benefits humanity.
Anna Ivanova, a postdoctoral researcher at MIT Quest for Intelligence, discusses her recent paper on large language models (LLMs) and their capabilities. She reviews the performance of LLMs on two aspects of language use: formal linguistic competence and functional linguistic competence. The conversation explores parallels between linguistic competence and AGI, the need for new benchmarks, and whether LLMs can address various aspects of functional competence.
Building a better world with Artificial Intelligence. Get smarter faster with AI Bytes. Each issue features 3 articles, 2 podcasts, and 1 video delivered to your inbox once a week.
AI Tool Tracker
I’m sharing a Notion page with the various AI tools I’ve been trying out. You can access it here.