! OpenAI AI-Services


 

Accessing Artificial Intelligence Services

 

Welcome to the cutting-edge of Artificial Intelligence services provided by industry leaders such as OpenAI.

 

clip0581

This is how the AI imagines its own Services ("Open AI").

 

The SPR, now supercharged with OpenAI's GPT (Generative Pre-trained Transformer) models, has been trained to understand not just code, but also natural language. These GPT models react to their inputs, known as "prompts", and produce textual outputs. Crafting a prompt is akin to guiding your SPR commands, typically by providing instructions or examples to successfully accomplish a task.

 

With the GPT-powered SPR, you can develop applications that can:

 

Draft documents

Generate computer code

Respond to queries about a knowledge base

Analyze texts

Construct conversational agents

Equip software with a natural language interface

Tutor in a variety of subjects

Translate languages

Simulate characters for games

… and so much more!

 

To interact with our advanced GPT models, you'll first need to initialize them via the SPR.

This requires setting your API-Key and choosing a model.

 

We offer a range of models, starting from GPT-1, the first GPT language model built on the Transformer architecture, up to the latest GPT-4 model, a significant advancement over everything that came before. Each model has its unique strengths and builds upon the achievements and improvements of its predecessors, allowing for increasingly sophisticated responses and capabilities.

 

As AI is constantly evolving, newer models will become available, and you will be able to use them with the SPR as well.

 

After initialization - that is, once you have set your API-Key and chosen a model -

you can send requests to the AI. Technically, your requests are called "prompts".

In return, you'll receive a response encapsulating the chosen model's output.
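
Curious what happens behind the curtain? Here is a minimal sketch of such a prompt/response round trip - written in Python with OpenAI's own client library purely for illustration. It is not SPR code, and the key, model and prompt are placeholders:

```
# Illustration only: roughly what the SPR performs for you internally.
# Requires the openai Python package (the pre-1.0 "openai==0.28" style API).
import openai

openai.api_key = "sk-..."  # placeholder - use your own secret key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # the model you chose
    messages=[{"role": "user", "content": "Say hello in one short sentence."}]  # the prompt
)

print(response["choices"][0]["message"]["content"])  # the model's textual output
print(response["usage"])                             # tokens consumed by this request
```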

 

Do I need to learn and implement JSON? 🌌

I don't think so. Generally, the SPR does all the JSON handling for you, "under the hood". Of course JSON is there, but it is invisible to the user.
There may be a few cases where you want to "Escape" or "UnEscape" strings; for these cases the needed commands are also available within the AIC.-Command set.
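
If you are curious what escaping actually means, here is a tiny sketch in Python (illustration only - within the SPR the corresponding escape/unescape commands do the equivalent for you):

```
import json

text = 'He said: "Hello"\nSecond line'   # raw text containing quotes and a line break

escaped = json.dumps(text)   # JSON-safe: quotes become \" and the line break becomes \n
print(escaped)

unescaped = json.loads(escaped)          # back to the original text
print(unescaped == text)                 # True
```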

 

Open AI-Models? 🌌

Our newest models, GPT-4 and GPT-3.5-turbo, can be utilized through the chat completions API endpoint.
Using the SPR, you simply use the AIC.Ask_Chat command and do not have to care about further details.

Currently, only the older legacy models are accessible via the completions API endpoint. These are cheaper and are available using the AIC.Ask_Completion command.

You just choose the right command; everything else is done for you inside the SPR, under the hood.
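
For comparison, here is roughly what a call to the legacy completions endpoint - the one behind AIC.Ask_Completion - looks like underneath. Again a Python illustration only, with placeholder values; it is not SPR code:

```
import openai

openai.api_key = "sk-..."  # placeholder key

# Chat models (gpt-3.5-turbo, gpt-4) go through the chat completions endpoint;
# legacy models such as text-davinci-003 use the older completions endpoint:
response = openai.Completion.create(
    model="text-davinci-003",
    prompt="Translate 'Guten Morgen' into English.",
    max_tokens=50
)

print(response["choices"][0]["text"])  # the model's textual output
```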

 

In addition to the OpenAI services, if you're interested in top-tier speech synthesis, ElevenLabs' services are also readily accessible through these commands.

You will find more details below.

 

To use these AI commands, you will need to obtain a commercial API-Key from OpenAI. You can log in to your OpenAI account and get your API-Key there [click the button below].

 

With these instructions and your API-Key, you have a gateway to harnessing the power of our advanced GPT models. Happy experimenting!

 

 

Unlock the Power of OpenAI: Your API Key Guide 🚀

 

Welcome, intrepid coder! You’re about to embark on a thrilling journey through the cosmos of artificial intelligence with OpenAI. But before you launch, you’ll need the golden ticket - your OpenAI API Key! This key is not just a string of characters; it's your passport to a universe of possibilities. Let's get you acquainted with this magical artifact.

 

What is an OpenAI API Key? 🗝️

 

Imagine a mystical key that opens the gates to an enchanted castle. In the world of OpenAI, the API Key is just that! It's a unique combination of letters, numbers, and symbols that authenticates and secures your gallant quests through OpenAI's services.

 

The Majestic Format

 

Your API Key is like a spell, and every spell has its incantation. The OpenAI API Key typically begins with a sacred prefix - `sk-`, short for Secret Key - followed by a series of alphanumeric characters. The prefix is the guardian: it reminds you that this key is a secret and must be treated as one.

 

Here’s what it might look like in all its glory:

 

```

sk-enchantedforest789magiccastle

```

 

or

 

```

sk-dragonslayer123wizardkingdom

```

 

Behold! But remember, these are mere illustrations and not actual keys. Your key will be unique, just like a wizard's wand.

 

Why is it Precious? 💎

 

This key is your identity in the realm of OpenAI. It whispers to the gates, letting them know you are a worthy traveler. Without it, the doors remain closed. With it, you command the clouds, access mystical functionalities, and conjure powerful AI magic. Guard it with your life!

 

Safeguarding Your Treasure 🛡️

 

Your API Key is akin to a royal seal. In the wrong hands, kingdoms can fall. Keep it secret; keep it safe. Never share it with others, and especially be cautious of where you enter it. It is bound to your account and holds the power to access your resources and data.

To help you safeguard your API-Key, the SPR offers the option to save it in encrypted form in a "Keyfile" that you can use for all your scripts and even include in your executable. This way, the API-Key never appears in clear text in your scripts.

Alternatively, you can use the Datamaker to turn the API-Key into inline code.

 

Ready to Embark? 🌌

 

With your API Key in hand, you are ready to set sail across the boundless seas of artificial intelligence. Harness the power, create wonders, and let your imagination be your compass.

 

Bon voyage, brave explorer! 🚀

 

 

clip0577

 

 

Understanding OpenAI Usage Costs

 

Diving into the world of Artificial Intelligence with OpenAI involves a fascinating behind-the-scenes process.
As you tap into the power of AI, you're essentially interacting with sophisticated Large Language Models (LLMs). Each interaction is quantified using a measure called "tokens".

 

Imagine tokens as the fundamental building blocks of language comprehension and generation in AI.
Every word, punctuation mark, or piece of whitespace your AI model processes or produces is counted in tokens.
For instance, the sentence "Hello, world!" is split into tokens like this: "Hello" (1 token), "," (1 token), " world" (1 token), "!" (1 token) - four tokens in total.

As a rule of thumb, one token corresponds to roughly four characters of English text - about three quarters of a word - but this can vary with the complexity and length of the word.
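
If you want to check token counts yourself, OpenAI's tiktoken library counts them offline; a minimal Python sketch (assumes the package is installed with pip install tiktoken):

```
import tiktoken

# Load the tokenizer that gpt-3.5-turbo and gpt-4 use.
enc = tiktoken.encoding_for_model("gpt-3.5-turbo")

tokens = enc.encode("Hello, world!")
print(tokens)       # the token IDs
print(len(tokens))  # 4 tokens for this sentence
```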

 

Now, here's where it gets interesting.

The cost of using OpenAI's services is tied directly to the number of these tokens you utilize and the type of LLM you engage with.

The more tokens your requests consume, and the more advanced the LLM you select, the higher the cost. But worry not!

These charges are typically quite modest, and they fund the continuous development and enhancement of these remarkable AI services.

 

So, every time you command an AI model to draft a poem, translate a sentence, or even answer a trivia question, remember:

you're actually engaging with a complex dance of tokens, all buzzing to create a symphony of AI intelligence.

And that's what makes your journey with OpenAI not just a service, but an experience.

 

Optimized for dialogue, the performance of ChatGPT models is impressive, costing at this time only $0.0015 for input and $0.002 for output per 1,000 tokens.

Other models are designed to follow single-turn instructions.

 

See Details about Pricing here (22-06-2023):

The cost varies from as low as $0.0004 to as high as $0.02 per 1,000 tokens, depending on the model you choose - from Ada, the fastest, to Davinci, the most powerful.

 

The AI-Models that can be used with the SPR are currently:

 

 "gpt-3.5-turbo-0301"  0,002 Cent  / 1K Tokens

 "text-davinci-003"

 "text-davinci-002"

 "code-davinci-002"

 "text-curie-001"

 "text-babbage-001"    0,0005 Cent  / 1K Tokens

 "text-ada-001"        0,0004 Cent  / 1K Tokens

 "gpt-4-0314" 

 

On this page from OpenAI you can track your Usage of these services.

You can also set Limits for maximum usage, if desired.

 

 

clip0576

 

 

Important News from July 6, 2023

Source: OpenAI Blog

 

Starting January 4, 2024, older completion models will no longer be available, 
and will be replaced with the following models:

 

 

Older model                New model

ada                        ada-002
babbage                    babbage-002
curie                      curie-002
davinci                    davinci-002
davinci-instruct-beta      gpt-3.5-turbo-instruct
curie-instruct-beta        gpt-3.5-turbo-instruct
text-ada-001               gpt-3.5-turbo-instruct
text-babbage-001           gpt-3.5-turbo-instruct
text-curie-001             gpt-3.5-turbo-instruct
text-davinci-001           gpt-3.5-turbo-instruct
text-davinci-002           gpt-3.5-turbo-instruct
text-davinci-003           gpt-3.5-turbo-instruct

 

Applications using the stable model names for base GPT-3 models (ada, babbage, curie, davinci) will automatically be upgraded to the new models listed above on January 4, 2024. The new models will also be accessible in the coming weeks for early testing by specifying the following model names in API calls: ada-002, babbage-002, curie-002, davinci-002.

 

Developers using other older completion models (such as text-davinci-003) will need to manually upgrade their integration by January 4, 2024 by specifying gpt-3.5-turbo-instruct in the “model” parameter of their API requests. gpt-3.5-turbo-instruct is an InstructGPT-style model, trained similarly to text-davinci-003. This new model is a drop-in replacement in the Completions API and will be available in the coming weeks for early testing.
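
For developers who call the API directly (SPR users do not have to deal with this level of detail), the manual upgrade really is just a change of the model name; a rough Python sketch with placeholder values:

```
import openai

openai.api_key = "sk-..."  # placeholder key

# Before the migration: model="text-davinci-003"
# After the migration: simply specify the drop-in replacement.
response = openai.Completion.create(
    model="gpt-3.5-turbo-instruct",
    prompt="Summarize the benefits of unit tests in one sentence.",
    max_tokens=60
)

print(response["choices"][0]["text"])
```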

 

Developers wishing to continue using their fine-tuned models beyond January 4, 2024 will need to fine-tune replacements atop the new base GPT-3 models (ada-002, babbage-002, curie-002, davinci-002), or newer models (gpt-3.5-turbo, gpt-4). Once this feature is available later this year, we will give priority access to GPT-3.5 Turbo and GPT-4 fine-tuning to users who previously fine-tuned older models. We acknowledge that migrating off of models that are fine-tuned on your own data is challenging. We will be providing support to users who previously fine-tuned models to make this transition as smooth as possible.

 

In the coming weeks, we will reach out to developers who have recently used these older models, and will provide more information once the new completion models are ready for early testing.

 

Deprecation of the Edits API

Users of the Edits API and its associated models (e.g., text-davinci-edit-001 or code-davinci-edit-001) will need to migrate to GPT-3.5 Turbo by January 4, 2024. The Edits API beta was an early exploratory API, meant to enable developers to return an edited version of the prompt based on instructions.

We took the feedback from the Edits API into account when developing gpt-3.5-turbo and the Chat Completions API, which can now be used for the same purpose.