Paperback, 214 pages
Published April 3, 2023
Here is the definitive technical guide to GPT-4 and its loquacious counterpart, ChatGPT. Alongside step-by-step examples of prompt engineering and fine-tuning, the book examines the current discussion of the technology's promise and peril. Includes a 2-year subscription to GPTAnalytica's PromptBuilder tool.
Contents:
1 Preface
2 A short history of intelligence
  2.1 What is “intelligence”?
  2.2 Intelligence and humans
  2.3 Intelligence and computing
  2.4 Artificial intelligence
  2.5 Generative AI
  2.6 Conversant AI
  2.7 The Promethean Moment
3 Models and sources
  3.1 Natural Language Processing (NLP)
  3.2 Language Modeling (LM)
  3.3 Pre-GPT Language Models
  3.4 GPT Language Models
    3.4.1 From data to training set
    3.4.2 Limitations and bias
  3.5 Common Crawl
  3.6 WebText data set
    3.6.1 Test set
  3.7 Wikipedia
  3.8 Quality of sources
4 GPT-3
  4.1 Tokens
  4.2 Parameters
  4.3 GPT-3 and ChatGPT
5 GPT-4
6 ChatGPT
7 Using GPT and ChatGPT in OpenAI
  7.1 Playground
    7.1.1 Mode
    7.1.2 Model
    7.1.3 Temperature
  7.2 ChatGPT playground
  7.3 Get your API key
  7.4 Programmatic use of OpenAI
    7.4.1 Import the openai library
    7.4.2 An example chat API call
8 OpenAI via Python
9 OpenAI via Node.js
10 OpenAI .NET API
11 Prompt engineering
  11.1 Misunderstanding in human communication
  11.2 Misunderstanding in ChatGPT
  11.3 Model capabilities depend on context
  11.4 How to improve reliability on complex tasks
    11.4.1 Provide quality data
    11.4.2 Check your settings
    11.4.3 Use plain language to describe your inputs and outputs
    11.4.4 Show the API how to respond to any case
    11.4.5 Add context
    11.4.6 Include helpful information up-front
    11.4.7 Give examples
    11.4.8 Length of response
    11.4.9 Define a role
    11.4.10 Be more specific
    11.4.11 Divide a complex task into simpler tasks
    11.4.12 Prompt the model to explain before answering
    11.4.13 Ask for explanations before the answer
12 Fine tuning with a custom dataset
  12.1 Extract data into a csv file
  12.2 Check the headers in OpenAI
  12.3 Playground
  12.4 Create Prompt and Completion Pairs
  12.5 Prepare for GPT
  12.6 Fine-tune a GPT model with your data
  12.7 Interact with your fine-tuned model
13 Robust fine tuning
  13.1 Creating a robust, fine-tuned GPT model
    13.1.1 Step 1: Data preparation
    13.1.2 Step 2: Model architecture selection
    13.1.3 Step 3: Model training
    13.1.4 Step 4: Model evaluation
14 Self-taught reasoner
15 Data retrieval plug-in
  15.1 Plugins
  15.2 Retrieval Plugin
  15.3 Memory Feature
  15.4 Security
  15.5 API Endpoints
  15.6 Quickstart
16 Additional techniques
  16.1 Selection-inference prompting
  16.2 Faithful reasoning architecture
  16.3 Least-to-most prompting
17 Act-as prompts
18 Prompt templates
19 Template libraries
20 Prompt generators
21 GPTAnalytica.com