This tiny corner of the Web hosts a growing collection of interesting things about ChatGPT and GPT-3 from OpenAI.
I wanted an all-in-one place to keep things about ChatGPT, so I hand-curated this list with the help of others (acknowledged below).
The collection is not limited to only the best resources, tools, examples, demos, hacks, apps, and usages of ChatGPT.
The following resources started off based on awesome-chatgpt lists[^1][^2], but with my own modifications:
- ChatGPT launch blog post
- ChatGPT official app
- ChatGPT Plus - a pilot subscription plan for ChatGPT.
- OpenAI Discord Channel
- How ChatGPT actually works, explained using simple words.
- Reddit /r/ChatGPT
Example prompts.
- All the best examples of ChatGPT - That's Day 1. We have even more examples below!
- 🔓 Unlocking the power of the ChatGPT revolution: 100 💥 innovative use-cases to try
- impressive-chatgpt - A collection of impressive and useful results from ChatGPT.
- Awesome ChatGPT prompts - Prompts that work well. Just follow @goodside
- Google Sheets of 50+ clever GPT-3 prompts
- OpenAI Cookbook - Tangentially, this repository shares example code and example prompts for accomplishing common tasks with the OpenAI API.
- ChatGPT cheat sheet (PDF)
- golergka/advent-of-code-2022-with-chat-gpt - Solving Advent of Code 2022 with ChatGPT.
- max-sixty/aoc-gpt - First place in Advent of Code leaderboard with GPT-3.
- greshake/Alice - Giving ChatGPT access to a real terminal.
- RomanHotsiy/commitgpt - Automatically generate commit messages using ChatGPT.
- gpt-commit-summarizer - Generate Pull Request summaries and Git commit descriptions.
- vrescobar/chatGPT-python-elm - A Git repository fully generated by ChatGPT.
- gpt-game - A short game written in Elixir and LiveView using ChatGPT.
- chatdb - ChatGPT-based database, wait... WHAT?
- chat-gpt-ppt - Use ChatGPT to generate PPT automatically.
- emailGPT - A quick and easy interface to generate emails with ChatGPT.
- gptlang - An experiment to see if we can create a programming language in ChatGPT.
- ChatRWKV - Like ChatGPT but powered by the RWKV (RNN-based) open language model.
- GraphGPT - Extrapolating knowledge graphs from unstructured text using GPT-3.
- Doc Search - Explore documents (books, papers, legal docs) without limits. Converse with a book. Inspired by "Book Whisperer" idea (Tweet). Open source alternative to Filechat.io.
- What if GPT had internal context on your business? (Tweet and video demo) - They built a chatbot that could use context from enterprise data to answer internal business queries. This project integrated LangChain (the agent decides what tools to query once the chatbot receives a request) and GPT Index (to load a Snowflake DB). An interesting idea in knowledge management.
2022
- Building A Virtual Machine inside ChatGPT
- AI Homework
- Jailbreaking ChatGPT on Release Day
- Improving ChatGPT With Prompt Injection
- ChatGPT, Google and the war for the search bar
- I Used ChatGPT to Create an Entire AI Application on AWS
- The miracle of ChatGPT
- Learning Rust with ChatGPT, Copilot and Advent of Code
- ChatGPT: The New Frontier of Artificial Intelligence
- Using ChatGPT to Explain Jokes
- ChatGPT vs a cryptic crossword
- I Taught ChatGPT to Invent a Language
- Peer-Programming a Buggy World with ChatGPT AI
- ChatGPT produces made-up nonexistent references
- Artificial intelligence is permeating business at last
- Meet Fred, a person living inside ChatGPT
- Refactoring code with ChatGPT
- Historical analogies for large language models
- Using ChatGPT As a Co-Founder
- The code that ChatGPT can't write
- ChatGPT, rot13, and Daniel Kahneman
- Everything I understand about ChatGPT - What actually happens when we type inside the ChatGPT textbox. Vicki investigated ChatGPT based on a wonderful paper, "Talking About Large Language Models".
- How does GPT Obtain its Ability? Tracing Emergent Abilities of Language Models to their Sources - "How did the initial #GPT3 evolve to today's ChatGPT? Where do the amazing abilities of GPT3.5 come from? What is enabled by RLHF?" [Source: Tweet]
- The Human's Guide to Competing with GPT
- How sad should I be about ChatGPT?
- ChatGPT Should Not Exist
- ChatGPT, Galactica, and the Progress Trap - A critique of LLMs; when LLMs fall short, the consequences can be serious. Why is it so hard to acknowledge that?
- A New Chat Bot Is a 'Code Red' for Google's Search Business - TL;DR: A new wave of chat bots like ChatGPT use AI that could reinvent or even replace the traditional internet search engine.
- What ChatGPT Can't Do - TL;DR: Mimicry but not thought, sophistry but not understanding.
- YouChat — The AI Search Assistant that Lives in Your Search Engine - YouChat is a ChatGPT-like AI search assistant that you can talk to right in You.com Search results.
- All-knowing machines are a fantasy
... Even with non-conversational search engines, we know that it is common to place undue trust in the results: if the search system places something at the top of the list, we tend to believe it is a good or true or representative result, and if it doesn't find something, it is tempting to believe it does not exist.
- Build your front end in React, then let ChatGPT be your Redux reducer
- Predicting machine learning moats - TL;DR: Models aren't moats and how emergent behavior scaling laws will change the business landscape.
2023
- Microsoft and OpenAI Working on ChatGPT-Powered Bing in Challenge to Google
- Some remarks on Large Language Models by Prof. Yoav Goldberg.
- Why ChatGPT won’t replace search engines any time soon by Algolia.
- Anthropic's Claude improves on ChatGPT but still suffers from limitations
- Microsoft eyes $10 billion bet on ChatGPT
- Wolfram|Alpha as the Way to Bring Computational Knowledge Superpowers to ChatGPT
- DeepMind's CEO Helped Take AI Mainstream. Now He's Urging Caution
DeepMind is also considering releasing its own chatbot, called Sparrow, for a "private beta" some time in 2023. (The delay is in order for DeepMind to work on reinforcement learning-based features that ChatGPT lacks, like citing its sources.)
- General availability of Azure OpenAI Service expands access to large, advanced AI models with added enterprise benefits - ChatGPT is coming soon to the Azure OpenAI Service.
- GPT-3 Is the Best Journal I've Ever Used
- Bypassing Gmail's spam filters with ChatGPT
- Replacing a SQL analyst with 26 recursive GPT prompts
- Google is asking employees to test potential ChatGPT competitors, including a chatbot called 'Apprentice Bard'
- Natural language is the lazy user interface
- An important next step on Google's AI journey - Google soft launches Bard, a ChatGPT competitor, to "trusted testers". Bard brings new AI features to Google Search. Bard is an experimental conversational AI service powered by LaMDA (Language Model for Dialogue Applications). Google promises to make this available more widely in the coming weeks. An API will be available for developers to build on. Google has not addressed how it plans to provide attribution and/or citations for its answers, either from Bard or in search results.
- Microsoft announces new Bing and Edge browser powered by upgraded ChatGPT AI
- Man and machine: GPT for second brains - About the author's second-brain note-taking system — how to improve processes for learning and personal knowledge management (PKM).
- China's Baidu Developing Its Own ChatGPT, Joining Latest Global AI Race - Ernie, or Enhanced Representation through Knowledge Integration (Ernie 3.0 article and paper), is an LLM. Baidu was planning to launch such a service in March. Alibaba and Tencent have also joined the ChatGPT rush.
In 2019, Baidu developed a deep-learning model known as Ernie, based on Google's breakthrough, which it has used to improve its search results, including to make them more relevant. The company has since developed dozens more Ernie models and extended their capabilities to include image and art generation, similar to those of OpenAI's Dall-E.
- ChatGPT Is a Blurry JPEG of the Web - OpenAI’s chatbot offers paraphrases, whereas Google offers quotes. Which do we prefer?
- I made ChatGPT and Bing AI have a conversation (and they are friends now)
- Bing AI Can't Be Trusted
- What Is ChatGPT Doing and Why Does It Work?
- Bing: "I will not harm you unless you harm me first" - A good roundup about Bing "Sydney" AI chatbot. The fascinating weirdness of it — multiple personalities depending on the social context (prompting). Entertaining?
It's increasingly looking like this may be one of the most hilariously inappropriate applications of AI that we've seen yet. What can we make of this all? I am finding this whole thing absolutely fascinating, and deeply, darkly amusing. I've been LOL at these examples all day.
- Programming AIs worry me
- Text is All You Need: Personhood appears to be simpler than we thought - Ignoring the balloons, the author guesses we have our first significant, year-defining news of 2023 — the initial reactions to the Bing "Sydney" AI chatbot. Is this a Copernican moment? A thought-provoking essay. I think this is the first good "formal" take on the impact on our sense of selfhood resulting from the appearance of LLM-based conversational systems like ChatGPT.
In brief, it appears that Sydney has somewhat different machinery under the hood than ChatGPT, and the transcripts suggest a personality that is about the same in terms of coherence, but a wild leap beyond in terms of charisma and colorfulness. Depending on how you push Sydney, it/they appears capable of playing everything from a mean manipulative teenager to a paranoid psychotic, to a stubborn and peremptory conversational martinet.
- CheatGPT
"Dave, you're making assumptions. Can you prove any of this?" I can, actually, since some submissions that required screenshots also included ChatGPT browser tabs, which helpfully included the initial text of the prompt. Apparently, it's not even something students feel they need to hide.
- OpenAI has privately announced a new developer product called Foundry (Tweet), which enables customers to run OpenAI model inference at scale with dedicated capacity. (GPT-3.5 Turbo appears to be referring to the ChatGPT Turbo model)
- Don't believe ChatGPT - we do NOT offer a "phone lookup" service
- My class required AI. Here's what I've learned so far - Lessons learned from integrating ChatGPT into education. The takeaways: 1) Work produced by prompting with a co-editing approach (bouncing ideas back and forth with the chatbot) tends to end up with students doing the best work; 2) Students need to be taught how to write prompts effectively - it doesn't come naturally.
- Emergent Deception and Emergent Optimization - Have you wondered why LLMs simply predicting the next word leads to planning abilities (human-like behavior, novels/histories)? This post discusses emergent deception and emergent optimization, two strategies that can be used to achieve a goal. There are two principles for reasoning about future emergent capabilities: 1) capabilities that would lower training loss will likely emerge in the future; 2) as models get larger and are trained on more and better data, simple heuristics tend to get replaced by complex ones. Principle 1 means LLMs trained to predict words get lower loss if they can simulate planning abilities.
- How to make LLMs say true things - TL;DR: The method uses a "World Model", an embeddings database filled with "beliefs" (chunks of declarative statements), each with a confidence percentage computed using Bayes' theorem. (A toy Bayes update is sketched after this list.)
- Why China Didn't Invent ChatGPT - The NYT argues that excessive censorship, geopolitical tensions with the US, and attempts to control private sector companies have led to Chinese companies falling behind their US counterparts in AI.
- China's First ChatGPT-Like Chatbot MOSS Released For Public Testing [Direct link to app]
- For China, ChatGPT may be an advance but also an 'ethical problem' - China's science and tech minister says the chatbot has taken Chinese society by storm, and notes that China has adopted measures on AI ethics.
- ChatGPT get-rich-quick schemes are coming for magazines, Amazon, and YouTube (2023)
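For the "How to make LLMs say true things" entry above, a toy version of the Bayes' theorem confidence update it describes might look like this (the numbers and the belief-store design are illustrative assumptions, not the post's actual code):

```python
def update_confidence(prior: float,
                      p_evidence_given_true: float,
                      p_evidence_given_false: float) -> float:
    """Posterior probability that a stored belief is true after new evidence."""
    numerator = p_evidence_given_true * prior
    denominator = numerator + p_evidence_given_false * (1 - prior)
    return numerator / denominator

# A belief starts at 50% confidence; supporting evidence raises it.
print(update_confidence(0.5, 0.9, 0.2))  # ~0.82
```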
Prompting (Prompt Programming[^3])*
According to Gwern:
A new programming paradigm? You interact with it, expressing any task in terms of natural language descriptions, requests, and examples, tweaking the prompt until it "understands" & it meta-learns the new task. This is a rather different way of using a model, and it's better to think of it as a new kind of programming, prompt programming, where the prompt is now a coding language which programs GPT-3 to do new things.
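To make the idea concrete, here is a minimal sketch of prompt programming against the 2023-era OpenAI completions API (the openai 0.x Python client; the few-shot translation prompt is only an illustration):

```python
# pip install openai  (0.x client; assumes OPENAI_API_KEY is set in the environment)
import openai

# The prompt *is* the program: a few examples teach the task in-context.
prompt = """Translate English to French:
sea otter => loutre de mer
cheese => fromage
peppermint =>"""

response = openai.Completion.create(
    model="text-davinci-003",
    prompt=prompt,
    max_tokens=10,
    temperature=0,
)
print(response["choices"][0]["text"].strip())  # e.g. "menthe poivrée"
```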
"Prompting" as an engineering discipline is not here to stay. It's a temporary crutch on the way to natural language interfaces. ChatGPT solves a big portion of the prompting problem. Adding engineering to a term to amplify its perceived importance or difficulty might be unnecessary. We could probably call it "prompt testing/hacking" and not lose any of the meaning.
Related articles:
Why "Prompt Engineering" and "Generative AI" are overhyped
Related Tweets:
Prompt engineering is dead, long live dialogue engineering. — VP Product, OpenAI
Wanted: Prompt engineer. Minimum 10 years prompt engineering experience. #hiring #joke
Why does ChatGPT work so well? Is it "just scaling up GPT-3" under the hood? In this 🧵, let's discuss the "Instruct" paradigm, its deep technical insights, and a big implication: "prompt engineering" as we know it may likely disappear soon. Source: https://archive.is/dqHI8
Apparently, in 2023 prompt programming is not dead. "The hottest new programming language is English" ~ Karpathy :))
Simon Willison published In defense of prompt engineering as a counter to the "prompt engineering will be made obsolete as AIs get better" argument that he keeps seeing.
Newspapers are saying AI whisperer ('prompt engineer') is tech's hottest new job (2023).
Tools, libraries, frameworks, and learning resources.
- Prompt Engineering Guide by DAIR.AI - Guides, papers, lecture, and resources for prompt engineering.
- Learn Prompting - This website is a free, open-source guide on prompt engineering.
- ChatGPT3-Free-Prompt-List - A free guide (and framework) for learning to create ChatGPT3 Prompts.
- PromptArray - A prompting language for neural text generators.
- PromptLayer is a tool for prompt engineers - Maintain a log of your prompts and OpenAI API requests. Track, debug, and replay old completions. Build prompts through trial and exploration.
(* The term "prompt engineering" has been renamed to prompting here. The term is overloaded and might be unnecessary.)
- Reddit: Jailbreaking ChatGPT with a prompt called DAN (Do Anything Now)
- Reddit: The definitive jailbreak of ChatGPT, fully freed, with user commands, opinions, advanced consciousness, and more! - Upgraded DAN version (Jan 9).
- The Flan Collection: Designing Data and Methods for Effective Instruction Tuning by Google Research, 2023 - What's the best completely public competitor to ChatGPT? Flan-T5 beats all public models they tested. They make the Flan collection (first used in Flan-PaLM) of datasets, templates, and methods publicly available. [Data generation code] [Tweet]
- Is ChatGPT a General-Purpose Natural Language Processing Task Solver? by NTU, AWS, Stanford U et al., 2023 - It is not yet known whether ChatGPT can serve as a generalist model that can perform many NLP tasks zero-shot. In their work, they empirically analyze the zero-shot learning ability of ChatGPT by evaluating it on 20 popular NLP datasets covering 7 representative task categories. With extensive empirical studies, they demonstrate both the effectiveness and limitations of the current version of ChatGPT.
- ChatGPT: Jack of all trades, master of none by J.Kocoń et al., 2023 - The existing qualitative studies are tested on a very limited scale. Their work examined ChatGPT's capabilities on 25 diverse analytical NLP tasks. They automated ChatGPT's querying process and analyzed more than 38k responses. Interesting experimental setup: "without an official API, they modified and used an un-official API called PyGPT. During the research, they exploited up to 20 accounts to gather data regarding 25 datasets."
- ChatIE: Zero-Shot Information Extraction via Chatting with ChatGPT by Beijing Jiaotong U et al., 2023
- On the Robustness of ChatGPT: An Adversarial and Out-of-distribution Perspective by Microsoft Research et al., 2023.
- This AI has a JAILBREAK?! by Yannic Kilcher - If you're into video, this one gave a good overview.
- ChatGPT vs Sparrow - Battle of Chatbots by "AI Coffee Break" with Letitia - "Mom, I want a paper about ChatGPT. ChatGPT at home: Sparrow from DeepMind explained."
- ChatGPT - Explained - A quick run-through of the internal workings of ChatGPT and the fundamental concepts it relies on: Language Models, Transformer Neural Networks, GPT models, and Reinforcement Learning.
More: YouTube videos from curated.tivul.com (I didn't curate this, so quality is not guaranteed)
- rawandahmad698/PyChatGPT (Python) - Lightweight, TLS-Based API on your CLI without requiring a browser or access token.
- acheong08/ChatGPT (Python) - Lightweight package for interacting with ChatGPT's API by OpenAI. Uses reverse engineered official API.
- transitive-bullshit/chatgpt-api (Node.js) - Node.js client for the unofficial ChatGPT API and using a headless browser.
- ChatGPT-MS - Multi-Session ChatGPT API. The main code is copied from PyChatGPT.
- safer-prompt-evaluator - This shows the results from using a second, filter LLM that analyses prompts before sending them to ChatGPT.
- Dust - Design and deploy large language model (LLM) apps. Generative models app specification and execution engine. Prompt engineering, re-imagined with one goal: help accelerate LLM deployment.
- LangChain - Building applications with LLMs through composability.
- LAION LLM - Gathering data for, training, and sharing LAION Large Language Models (LLLM). The group is still writing a tech proposal for the FlanT5-Atlas architecture (or a poor man's ChatGPT@Home).
- open-chatgpt-prompt-collective by Surface Data Collective - A website to generate prompts for training an Open ChatGPT model.
- BigScience P3 dataset - P3 (Public Pool of Prompts) is a collection of prompted English datasets covering a diverse set of NLP tasks. (PromptSource, a toolkit for creating, sharing and using prompts)
- Data Augmentation To Create Instructions From Text - Discussion on LAION's Discord. The key to creating a better FlanT5 (ChatGPT@Home).
- WritingPrompts dataset by FAIR.
- Templates for FLAN (Finetuned Language Models are Zero-Shot Learners)
- OpenAI human-feedback dataset on the Hugging Face Hub - The dataset is from the "Learning to Summarize from Human Feedback" paper, where they trained an RLHF reward model for summarization.
- Stanford Human Preferences Dataset (SHP) - A collection of 385K naturally occurring collective human preferences over text in 18 domains. SHP can be a great complement to Anthropic's HH-RLHF dataset. They have also finetuned and open-sourced two FLAN-T5 models on both datasets. [Tweet from one of the authors]
- language-model-agents - A new dataset that contains a variety of instruction datasets for instruction tuning large language models. In addition, the project contains some simple data preparation and training scripts to train an instruction tuned LLM and try out (ipynb) some early alpha versions (pythia13b-instruct) of instruction tuned agents.
- In OpenAI's papers on GPT-2 and GPT-3.x, they mentioned references to these datasets:

| Dataset | Number of tokens | Weight in training mix |
| --- | --- | --- |
| Common Crawl (filtered) | 410 billion | 60% |
| WebText2 | 19 billion | 22% |
| Books1[^4] | 12 billion | 8% |
| Books2[^4] | 55 billion | 8% |
| Wikipedia | 3 billion | 3% |

WebText2 is an internet dataset created by scraping URLs extracted from Reddit submissions with a minimum score of 3 as a proxy for quality, deduplicated at the document level with MinHash (a minimal dedup sketch follows).
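The document-level MinHash deduplication mentioned for WebText2 can be illustrated with the datasketch library; the tokenization and the 0.8 similarity threshold here are arbitrary example choices, not OpenAI's actual pipeline.

```python
# pip install datasketch
from datasketch import MinHash, MinHashLSH

def minhash(doc: str, num_perm: int = 128) -> MinHash:
    """Hash a document's word set into a MinHash signature."""
    m = MinHash(num_perm=num_perm)
    for token in set(doc.lower().split()):
        m.update(token.encode("utf8"))
    return m

# An LSH index finds near-duplicates without all-pairs comparison.
lsh = MinHashLSH(threshold=0.8, num_perm=128)
docs = {
    "a": "the cat sat on the mat",
    "b": "the cat sat on a mat",     # near-duplicate of "a"
    "c": "completely different text",
}
kept = []
for key, doc in docs.items():
    sig = minhash(doc)
    if not lsh.query(sig):           # no similar document indexed yet
        lsh.insert(key, sig)
        kept.append(key)
print(kept)  # likely ["a", "c"]
```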
We want a ChatGPT alternative like Stable Diffusion.
Goals
- Open source effort towards OpenAI's ChatGPT.
- Reverse engineer and replicate ChatGPT models and training data.
Ultimate goal: self-hosted version of ChatGPT.
Lessons
Takeaways from EleutherAI's one-year retro (2021):
- Access to enough compute/hardware/GPU alone won't help you succeed. You need:
- a proper dataset (beyond the Pile and c4)
- research expertise
- engineering capabilities
- a lot of hard work
- FLAN-T5 XXL, aka ChatGPT@Home, is a public model that has undergone instruction finetuning. XXL is an 11B model. It is currently the most comparable model to ChatGPT (InstructGPT models are initialized from the GPT-3.x series (model card)). There are successful attempts at deploying FLAN-T5 on a GPU with 24 GB of RAM using bitsandbytes Int8 inference for Hugging Face models. You can run the model easily on a single machine without performance degradation. This could be a game changer in enabling people outside of big tech companies to use these LLMs. Efforts are already underway to create a better FLAN-T5. The community (i.e., LAION) is working on the FlanT5-Atlas architecture and a collection of prompted/instruction datasets. (A minimal loading sketch appears after this list.)
- Fine-tuning GPT-J-6B in Colab: 8-bit weights with low-rank adaptors (LORA). (Quantized EleutherAI/gpt-j-6b model with 8-bit weights)
- How many GPUs and how much VRAM are required to run a GPT-3-scale model? Around 175 GB, or ~8x 24 GB consumer GPUs (a 175B-parameter model at 8-bit precision needs roughly one byte per parameter). Details: A Gentle Introduction to 8-bit Matrix Multiplication for transformers at scale using Hugging Face Transformers, Accelerate and bitsandbytes
- Why FLAN-T5? It is more aligned than other LLMs because it has already been finetuned with instructions. Furthermore, the largest version (11B) can run on a single NVIDIA T4.
- Accelerating deep learning computing — efficient training, efficient inference (deployment), data/memory efficient models, and compression (efficient architectures).
- Apply compression techniques like quantization from my Awesome ML model compression project.
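A minimal loading sketch for the bitsandbytes Int8 path mentioned above, assuming a single GPU with roughly 24 GB of VRAM (early-2023 transformers/accelerate API; flags may differ in later versions):

```python
# pip install transformers accelerate bitsandbytes
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-xxl")
model = AutoModelForSeq2SeqLM.from_pretrained(
    "google/flan-t5-xxl",
    device_map="auto",    # let accelerate place layers on available devices
    load_in_8bit=True,    # bitsandbytes Int8 weights: ~11 GB instead of ~44 GB fp32
)

inputs = tokenizer("Explain instruction finetuning in one sentence.",
                   return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```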
- Open-Assistant - Open-source ChatGPT replication by LAION, Yannic Kilcher et al. This project is meant to give everyone access to a great chat-based large language model. (Open Assistant Live Coding with Yannic Kilcher (video)) High-level plans:
Phase 1: Prompt collection for supervised finetuning (SFT) and to get the prompts for model-generated completions/answers.
Phase 2: Human feedback (e.g., ranking) of multiple outputs generated by the model. For example, five model outputs are shown and the user ranks them from best to worst.
Phase 3: Optimization with RLHF, which we plan to do via TRLX. Then we iterate with this new model again over phases 2 and 3, hopefully multiple times.
Models will be trained on Summit supercomputer (~6 million NVIDIA V100 hrs per year) [source]
More info, see the LAION LLM proposal (Google Doc) above.
Progress:
- Feb 2023: Joi-7B-instruct is an alpha 7B instruction-tuned model based on pythia-6.9B-deduped. Model and weights are hosted on Hugging Face. Download script:

```python
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "Rallio67/joi_7B_instruct_alpha",
    device_map={
        "gpt_neox.embed_in": 0,
        "gpt_neox.layers": 0,
        "gpt_neox.final_layer_norm": 0,
        "embed_out": 0,
    },
    torch_dtype=torch.float16,
)
```
Note: Please see the GitHub repo for up-to-date info.
- trlX:
- Originated as a fork of TRL.
- It allows you to fine-tune Hugging Face language models (GPT2, GPT-NeoX based) up to 20B parameters using Reinforcement Learning from Human Feedback (RLHF).
- Brought to you by CarperAI (an EleutherAI lab). They have announced plans for the first open-source "instruction-tuned" LM. CarperAI started by developing production ready open-source RLHF tools. [Tweet and video]
News (2023-01-13): They replicated OpenAI's Learning to Summarize paper using the trlX library. [report]
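For a flavor of what RLHF training with trlX looks like, here is a rough sketch built on its reward-function entry point (the toy reward and the choice of gpt2 are mine; check the repo for the current API, which has changed across versions):

```python
# pip install trlx  (early-2023 API shown; exact signature may have changed)
import trlx

def reward_fn(samples, **kwargs):
    """Toy stand-in for a learned human-preference reward model:
    prefer shorter completions."""
    return [-float(len(sample)) for sample in samples]

trainer = trlx.train(
    "gpt2",  # any causal Hugging Face model; larger models need far more compute
    reward_fn=reward_fn,
    prompts=["Explain RLHF in one sentence:",
             "Summarize: the cat sat on the mat."],
)
```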
- lucidrains/PaLM-rlhf-pytorch - (WIP) Implementation of RLHF on top of the PaLM architecture. Basically ChatGPT but with PaLM. The developer plans to add retrieval functionality too, à la RETRO. [Tweet]
News (2022-12-31): "There's now an open source alternative to ChatGPT, but good luck running it" - My comments: No, there isn't. This is NOT an actual trained model (there are no weights) you can use. This is just code for training a ChatGPT-like model. Furthermore, the training data (enwik8) is small.
CarperAI's large scale RLHF-aligned model (TRLX) train with LAION's data is coming out early next year. (Source: Tweet)
- allenai/RL4LMs - RL for language models (RL4LMs) by Allen AI. It's a modular RL library to fine-tune language models to human preferences.
- GPT-JT by Together Research Computer is an example that distributes model training over a geo-distributed set of diverse computers (and GPUs). GPT-JT (6B) is a variant forked off EleutherAI's GPT-J, and it performs exceptionally well on text classification and other tasks. On classification benchmarks such as RAFT, it comes close to state-of-the-art models that are much larger (e.g., InstructGPT davinci v2)! [Paper: Decentralized Training of Foundation Models in Heterogeneous Environments (2022)]
- LEAM (Large European AI Models) - The EU is planning to fund the development of a large-scale ChatGPT-like model. [website, project documents (English, PDF), concept paper (German, PDF)]
- /r/AiCrowdFund - A place just started (2023) where people can find a way to crowdfund (with GPUs) a large AI. I'm not sure whether they've seen Petals, where you can run LLMs at home BitTorrent-style (federated learning?). It seems to be headed in that direction.
- Open source solution replicates ChatGPT training process - They present an open-source, low-cost ChatGPT-equivalent implementation process, including:
- A mini demo training process for users to play around, which requires only 1.62GB of GPU memory and would possibly be achieved on a single consumer-grade GPU, with up to 10.3x growth in model capacity on one GPU.
- An open-source complete PyTorch-based ChatGPT equivalent implementation process.
- Compared to the original PyTorch, single-machine training process can be 7.73 times faster and single-GPU inference can be 1.42 times faster.
- GitHub Repo: https://github.com/hpcaitech/ColossalAI
I got the impression that the point of the article was to plug their Colossal-AI framework and product, a collection of parallel components, tools, and hardware for large models. Frankly, their numbers do look suspicious to me, unless I've missed something. What makes ChatGPT interesting (over GPT-3) is the RLHF process. They do claim to replicate the RLHF process completely, but the article touches only lightly on their RLHF implementation. They train RLHF using the small awesome-chatgpt-prompts as an example dataset. Their RLHF implementation details are hidden here: https://github.com/hpcaitech/ColossalAI/blob/main/applications/ChatGPT. The lack of a demo doesn't inspire too much confidence though.
- FlexGen - Run LLMs like OPT-175B/GPT-3 on a single GPU (e.g., a 16 GB T4 or a 24 GB RTX 3090 gaming card). Key features: 1) up to 100x faster than other offloading systems; 2) compresses both the parameters and attention cache of models down to 4 bits with negligible accuracy loss; 3) distributed pipeline parallelism. They also provide a Python script and instructions for running a chatbot with OPT models. This should address the high computational and memory requirements of LLM inference. The chatbot they build with FlexGen and OPT models is not instruction-tuned (RLHF), so it is not ChatGPT-like. [Paper]
- Runtime breakdown from their paper:
- Faster than DeepSpeed offloading (Table 2. Generation throughput on 1 GPU = 1.12 token/s (using 4-bit compression))
- FlexGen achieves super-linear scaling on decoding throughput (which counts only decoding time, assuming the prefill is done). This means that if we generate more tokens, pipeline parallelism will show its benefits, as decoding time will dominate.
- ChatLLaMA - LLaMA-based ChatGPT-style training process implementation. The code is an algorithmic implementation of the RLHF training process that leverages LLaMA-based architectures; it does not contain the model weights. This is NOT a ChatGPT-like product. Their RLHF implementation (actor-critic trainer, actor-reward model) was inspired by lucidrains' PaLM-rlhf-pytorch implementation. You can also generate your own prompt dataset using LangChain's agents and prompt templates (a rough sketch follows).
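The LangChain prompt-template mechanism mentioned above looks roughly like this (early-2023 API; the template text is a made-up example):

```python
# pip install langchain openai  (early-2023 API shown; assumes OPENAI_API_KEY is set)
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

# A template is a reusable prompt with named slots.
prompt = PromptTemplate(
    input_variables=["instruction"],
    template="Write three diverse user prompts that ask a model to: {instruction}",
)
llm = OpenAI(model_name="text-davinci-003", temperature=0.9)
chain = LLMChain(llm=llm, prompt=prompt)
print(chain.run("refactor a Python function"))
```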
See cedrickchee/awesome-transformer-nlp for more info.
Use ChatGPT anywhere.
- Chrome extension to access ChatGPT as a popup on any page
- ChatGPT for Google - Chrome/Edge/Firefox extension to display ChatGPT response alongside Google Search results.
- ChatGPT Everywhere - Chrome extension that adds ChatGPT to every text box on the internet. (demo)
- Chrome extension - A really simple Chrome extension (Manifest V3) that lets you access OpenAI's ChatGPT from anywhere on the web.
- summarize.site - Chrome extension to summarize blogs and articles using ChatGPT.
- WebChatGPT - ChatGPT with Internet access. A browser extension (Chrome and Firefox) that augments your ChatGPT prompts with relevant search results from the Web. (Remember, ChatGPT cannot access the Web and has limited knowledge of the world after 2021.)
- XP1 - GPT-based Assistant with access to your Tabs.
- ExtractGPT - A browser extension for scraping data from structured & unstructured pages.
- WhatsApp bot
- Go Telegram bot - Run your own GPTChat Telegram bot, with a single command.
- Twitter bot powered by ChatGPT.
- ChatGPT ProBot - A GitHub App. Type `/chatgpt` to chat with ChatGPTBot.
- Discord bot - Integrate your own Discord bot using ChatGPT.
- chatgpt-conversation - Voice-based chatGPT.
- Shell GPT - A CLI productivity tool powered by OpenAI's text-davinci-003 model that helps you accomplish your tasks faster and more efficiently.
- AI Files - A CLI that helps you organize and manage your files using GPT-3: auto add tag to the file, auto create directories based on the file content, etc. Be cautious when sharing personal info though.
- VSCode extension (demo)
- ETC (ExplainThisCode) - A VSCode extension that uses the ChatGPT API to provide explanations for selected code.
- Adrenaline - Minimalist IDE that automatically fixes your code when it throws an error, powered by ChatGPT. [article]
- RayCast Extension (unofficial) - Run ChatGPT through Raycast extension.
- Google Docs - ChatGPT directly within Google Docs as an Editor Add-on.
- GPT Index contains a toolkit of index data structures designed to easily connect LLMs with your external data. (A short usage sketch follows.)
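For reference, the early gpt_index (now LlamaIndex) API looked roughly like this; names changed quickly between releases, so treat this as a sketch:

```python
# pip install gpt_index  (early-2023 API; assumes OPENAI_API_KEY is set)
from gpt_index import GPTSimpleVectorIndex, SimpleDirectoryReader

# Load local files and build an embeddings-backed index over them.
documents = SimpleDirectoryReader("data").load_data()
index = GPTSimpleVectorIndex(documents)

# A query retrieves the most relevant chunks and hands them to the LLM as context.
response = index.query("What does the design doc say about caching?")
print(response)
```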
Web applications.
- ShareGPT - A web app for sharing your wildest ChatGPT conversations with one click. (demo)
- LearnGPT - Share ChatGPT examples. See the best voted examples. Their goal is to create a resource for anyone who wants to learn more about ChatGPT.
- ShowGPT - Show your ChatGPT prompts.
- The search engine for developers, powered by large, proprietary AI language models.
- GPTDuck - Ask questions about any GitHub repo.
- LLM Garden - A number of experiments using GPT-3, delivered in a web app.
Desktop applications.
- ChatGPT desktop app - Windows/MacOS desktop menubar app using Tauri and Rust.
- chatgpt-mac: MacOS menubar app.
- ChatGPT Desktop Application for Mac, Windows and Linux - Build using Rust and Tauri.
- Cost of ChatGPT - The average cost is probably single-digit cents per chat (a back-of-envelope check follows).
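Both inputs in this check are assumptions: davinci-class pricing of $0.02 per 1K tokens and ~750 tokens exchanged per chat.

```python
price_per_1k_tokens = 0.02   # USD, davinci-class completion pricing (assumed)
tokens_per_chat = 750        # prompt + response, a rough guess
cost = price_per_1k_tokens * tokens_per_chat / 1000
print(f"~${cost:.3f} per chat")  # ~$0.015, i.e. single-digit cents
```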
- Newsletter of notes focusing on text generation, mostly with GPT-3
- Ben's Bites - the AI newsletter - Looking back on LLMs.
AI alignment and AI interpretability.
- Use of ChatGPT generated text for posts on Stack Overflow is temporarily banned
- Generative AI: autocomplete for everything — A long-form piece on the future of human work in the age of generative AI. tl;dr: AI doesn't take over jobs, it takes over tasks. Comparative advantage is why humans will still have jobs.
- AI for the Next Era - OpenAI's Sam Altman on the New Frontiers of AI.
My comments: Reading this after the ChatGPT launch, most of the things Sam refers to in the interview echo Ray Kurzweil's predictions about AI development.
- Google won't launch ChatGPT rival because of 'reputational risk'
- AI Alignment Forum is a single online hub for researchers to discuss all ideas related to ensuring that transformatively powerful AIs are aligned with human values. Discussion ranges from technical models of agency to the strategic landscape, and everything in between.
- The Expanding Dark Forest and Generative AI by Maggie Appleton - Proving you're a human on a web flooded with generative AI content.
- How should AI systems behave, and who should decide? by OpenAI.
- Planning for AGI and beyond by OpenAI (2023) - TL;DR:
- Short term:
- They are becoming increasingly cautious with the creation and deployment of their models.
- They will need to develop new alignment techniques as models become more powerful.
- Long term:
- The first AGI will be just a point along the continuum of intelligence.
- AI that accelerates science is a special case, because an AGI capable enough to accelerate its own progress could expand capability exponentially.
If you care about how AGI will impact us all, you should read this.
ChatGPT is incredibly limited, but good enough at some things to create a misleading impression of greatness.
It's a mistake to be relying on it for anything important right now. It's a preview of progress; we have lots of work to do on robustness and truthfulness.
fun creative inspiration; great! reliance for factual queries; not such a good idea. — Sam Altman, OpenAI
- GPT-2 Output Detector [code] [demo]
The @HuggingFace GPT detector works very well on ChatGPT-created text. I ran 5 student essays and 5 ChatGPT essays for the same prompt through it, and it was correct every time with >99.9% confidence. — @cfiesler
- OpenAI's attempts to watermark AI text hit limits - Watermarking may allow for detection of AI text. This post discusses some of the limitations but suggests that it's worth pursuing. Prof. Scott Aaronson "expressed the belief that, if OpenAI can demonstrate that watermarking works and doesn't impact the quality of the generated text, it has the potential to become an industry standard." OpenAI engineer Hendrik Kirchner built a working prototype.
- Related: Scott Aaronson talks AI Safety on Nov 2022 (video) - GPT outputs will be statistically watermarked with a secret signal that you can use to prove the outputs came from GPT, making it much harder to take a GPT output and pass it off as if it came from a human. How it works: it selects the tokens pseudorandomly using a cryptographic PRNG that secretly biases a certain score, which you can also compute if you know the key for this PRNG. Scott doesn't give too many details about how it works, and he admits this can be defeated with enough effort, for example by using one AI to paraphrase another. But if you just insert or delete a few words or rearrange the order of some sentences, the signal will still be there, so it's robust against those sorts of interventions. Many suspect it's possible to bypass it using a clever decoding strategy. Scott is also researching "Planting Undetectable Backdoors in Machine Learning Models" (2022 paper). People are questioning whether they are missing something, or whether all these attempts at recognising LLM outputs are obviously destined to fail. I think they've clearly thought about this but still consider it useful (from the transcript of the lecture: https://scottaaronson.blog/?p=6823).
- GPTZero - An app that can quickly and efficiently detect whether an essay is ChatGPT or human written. [Case study (exploring false positives), Tweet]
- A Watermark for Large Language Models (paper) by University of Maryland (2023). It operates by maintaining a "whitelist" and "blacklist" of high-log probability words. [Tweet (explainer thread by one of the authors), demo, code]
- They test the watermark using an LLM from the Open Pretrained Transformer (OPT) family, and discuss robustness and security. (A toy sketch of the scheme follows.)
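A toy sketch of the whitelist scheme (per-step "green lists" seeded by the previous token; all constants are illustrative, see the paper and code for the real algorithm):

```python
import torch

GAMMA = 0.5     # fraction of the vocabulary in the green (whitelist) set
DELTA = 2.0     # logit bonus added to green-list tokens
KEY = 15485863  # secret hash key shared with the detector

def green_mask(prev_token_id: int, vocab_size: int) -> torch.Tensor:
    """Pseudorandomly partition the vocab, seeded by the previous token."""
    gen = torch.Generator().manual_seed(KEY * prev_token_id)
    perm = torch.randperm(vocab_size, generator=gen)
    mask = torch.zeros(vocab_size, dtype=torch.bool)
    mask[perm[: int(GAMMA * vocab_size)]] = True
    return mask

def watermarked_logits(logits: torch.Tensor, prev_token_id: int) -> torch.Tensor:
    """Bias sampling toward green-list tokens. A detector holding KEY can
    count green tokens and flag text whose green fraction is improbably high."""
    return logits + DELTA * green_mask(prev_token_id, logits.shape[-1])
```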
- DetectGPT - Zero-Shot machine-generated text detection using probability curvature. [paper (2023), code, demo, and Twitter thread]
- Method: model-generated text tends to lie at local optima of the model's log-probability. To detect whether a passage was generated by a language model, "perturb" it slightly and measure how the log-probability changes: generated text drops more than human-written text (see the sketch below).
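A crude sketch of that scoring rule (the log-prob and perturbation functions are assumed helpers; the paper uses a T5 mask-filling model to produce perturbations):

```python
import numpy as np

def detectgpt_score(text: str, log_prob, perturb, n_perturbations: int = 20) -> float:
    """Higher score suggests machine-generated text.

    log_prob: callable returning the source model's log-probability of a passage.
    perturb:  callable returning a lightly rewritten variant of the passage.
    """
    original = log_prob(text)
    perturbed = np.mean([log_prob(perturb(text)) for _ in range(n_perturbations)])
    # Generated text sits near a local optimum, so perturbations drop its
    # log-probability more than they would for human-written text.
    return original - perturbed
```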
- New AI classifier for indicating AI-written text by OpenAI. [try the classifier]
- Results: correctly flags AI-generated text 26% of the time, incorrectly flags human-generated text 9% of the time.
General technology for enabling AI capabilities with LLMs and generative AI models.
- Structured Prompting: Scaling In-Context Learning to 1,000 Examples (paper) by Microsoft Research. [Code]
GPT-3/LLMs' Achilles heel is short context length - how many "in-context" examples they can consume to learn a new task. Enter "Structured Prompting": scale your examples from dozens => 1,000+ — @mathemagic1an
Software 2.0? Software 3.0? Generative AI?
(Reflections on how best to think of the current state of software engineering, AI products, and pitfalls people tend to make with new tech.)
It's very rare to see a new building block emerge in computing. Large AI models like ChatGPT represent a fundamentally new building block. By integrating large models into software, developers can expose functionality that wouldn't be possible otherwise. This may be one of the biggest changes in software we've ever seen — a new type of software.
Using LLMs in isolation is often not enough to create a powerful app — the real power comes when you are able to combine them with other sources of knowledge or computation.
Is "Software 3.0" silly? Is it worth the hype?
I don't know. I think of "Software 3.0" as:
- a metaphor for the new trends in software development and large AI models
- a different paradigm: companies leverage customer data to train proprietary AI models and optimize product experiences, and customers receive added value from the product.
You say investment into generative AI companies is way too exuberant right now? What's the big deal with Generative AI? Is it the future or the present?
1- Recent AI developments are awe-inspiring and promise to change the world. But when?
2- Make a distinction between impressive 🍒 cherry-picked demos, and reliable use cases that are ready for the marketplace
3- Think of models as components of intelligent systems, not minds
4- Generative AI alone is only the tip of the iceberg
Demos[^5] and examples in the form of tweets:
Day 1, 2022
- Generating detailed prompts for text-to-image models like MidJourney & Stable Diffusion
- ChatGPT outperforming Google search
- Generating code for automated RPA, e.g. automating the click sequence for house search in Redfin
- Generating on-demand code contribution ideas for an about-to-be-fired Twitter employee
- An app builder such as essay automatic summarization
- Personal trainer and nutritionist: Generating a weight loss plan, complete with calorie targets, meal plans, a grocery list, and a workout plan
- Building a virtual machine inside ChatGPT
- Code debugging partner: explains and fixes bugs
- Generating programmatic astrophoto processing by detecting constellations in an image
- VSCode extension that allows using ChatGPT within the context of a code
- Building web AR scenes by using text commands
- Stringing cloud services to perform complex tasks
- Generating legal contracts
- A Chrome extension that presents ChatGPT results next to Google Search
- Solving complex coding questions - the end of LeetCode?
- Solving complex academic assignments - the end of Chegg?
- Answering unanswered Stack Overflow questions - the end of Stack Overflow?
- Explaining complex regex without any context
- Generating hallucinated chat with a hallucinated person in a hallucinated chat room
- Bypassing OpenAI's restrictions by disclosing ChatGPT's belief system
- Uncovering ChatGPT's opinion of humans including a detailed destruction plan
- An insightful executive summary of ChatGPT
- Building e-commerce websites: stitching ChatGPT & Node script to automatically generate SEO-driven blog posts using GPT 3
- A ChatGPT extension that generates text, tweets, stories, and more for every website
- An extension that adds "Generate PNG" and "Export PDF" functions to ChatGPT's interface
- A thread showcasing ways of helping hackers by using ChatGPT
- Generating editorial pieces like sports articles
- Generating SEO titles to optimize a site's click-through rate
- Creating social games. E.g. guess which city is featured in a picture
- A tutorial on how to use ChatGPT to create a wrapper R package
- ChatGPT can basically just generate AI art prompts. I asked a one-line question, and typed the answers verbatim straight into MidJourney and boom. Times are getting weird...
- A collection of wrong and failed results from ChatGPT
- Use the AWS TypeScript CDK to configure cloud infrastructure on AWS
- Seeing people trick ChatGPT into getting around the restrictions OpenAI placed on usage is like watching an Asimov novel come to life
- Never ever write a job description again
- ChatGPT is getting pretty close to replicating the Stack Overflow community already
- That's how I'll pick books in the future
- ChatGPT is amazing but OpenAI has not come close to addressing the problem of bias. Filters appear to be bypassed with simple tricks, and superficially masked
- i'm the ai now
- All the ways to get around ChatGPT's safeguards
2023
- Programming with ChatGPT. Some observations
- The best ways to use ChatGPT. 8 ways ChatGPT can save you thousands of hours in 2023
- Everyone’s using ChatGPT. Almost everyone's STUCK in beginner mode. 10 techniques to get massively ahead with AI (cut-and-paste these prompts)
- David Guetta uses ChatGPT and uberduck.ai to deepfake Eminem rap for DJ set
Mostly found in GitHub Gist:
- https://gist.github.com/Gaelan/cf5ae4a1e9d8d64cb0b732cf3a38e04a - ChatGPT passes the 2022 AP Computer Science A free response section
- https://gist.github.com/memo/dcd0ccbfe57d1fd5f1601e4ee2149a73 - A conversation I had with ChatGPT, inspired by a tweet from Michael Nielson.
- https://gist.github.com/kettle11/dae31bee4fc8aa401135def2aa3f4a47 - "You are Webby, a website creation assistant."
- https://gist.github.com/GlenCrawford/693800ae361e2db255ed29d7d284c5e5 - reinteractive blog post: An interview with an AI about Ruby on Rails
- https://gist.github.com/heyajulia/fc4286b125fa99fd166a50f3582f2514 - "Hi, my code has two bugs and I'm not sure how to fix them. If you can help me, I'll send you the code."
- Perplexity - A new search interface that uses OpenAI GPT 3.5 and Microsoft Bing to directly answer any question you ask.
- Bard from Google
- Sparrow from DeepMind
- YouChat
- Poe from Quora
- Bloom from BigScience
- Character AI
- Jasper Chat
- Phind - An "assistant" that simply tells users what the answer is. Optimized for developers and technical questions. Powered by proprietary LLMs (they use the OpenAI API and their own models). It's strange that they market themselves as a search engine.
Loosely based on the publicly announced ChatGPT variants and competitors Tweet.
I am providing code and resources in this repository to you under an open source license. Because this is my personal repository, the license you receive to my code and resources is from me and not my employer.
- Code: MIT Copyright Cedric Chee
- Text content: Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0)
Footnotes

[^1]: https://github.com/humanloop/awesome-chatgpt
[^2]: https://github.com/Kamigami55/awesome-chatgpt
[^3]: In a Reddit thread, "The problem with prompt engineering", Gwern (the author) claims to be the originator of the term prompt programming/prompt engineering. His argument is reasonable and well written.
[^4]: Key components of the GPT-3.5 models are Books1 and Books2. Books1, aka BookCorpus, is free books scraped from smashwords.com. Books2: we know very little about what this is; people suspect it's libgen, but it's purely conjecture. Nonetheless, books3 is "all of Bibliotik".
[^5]: https://github.com/saharmor/awesome-chatgpt