OpenAI

The OpenAI API can be applied to virtually any task that involves understanding or generating natural language, code, or images. In Q-Consultation, we use its capabilities for semantic search of providers, creating transcriptions of videos, and generating text from a prompt.

The prompt is essentially how you “program” the model, usually by providing some instructions or a few examples. The completions and chat completions endpoints can be used for virtually any task, including content or code generation, summarization, expansion, conversation, creative writing, style transfer, and more.

The OpenAI API is powered by a set of models with different capabilities and price points. GPT-4 is the latest and most powerful model. GPT-3.5-Turbo is the model that powers ChatGPT and is optimized for conversational formats. To learn more about these models and what else OpenAI offers, visit the models documentation.

With OpenAI integrated into Q-Consultation, developers now have access to a wide range of advanced AI features and capabilities. Let's dive into how to use this integration.

To work with the OpenAI API, we use the OpenAI Node.js Library. In the application, we have implemented our own service that uses this library and exposes the same API. This service is already configured for use with Fastify, so we advise you to use it. We have also implemented all of the OpenAI use cases in this service, in the file apps/api/src/services/openai/integration.ts.
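
For reference, the service setup might look roughly like the following minimal sketch, assuming the openai v3 Node.js SDK and an OPENAI_API_KEY environment variable (the actual file in the repository may differ):

import { Configuration, OpenAIApi } from 'openai'

// Configure the SDK once and export a single client instance;
// the route examples below import it as openAIApi.
const configuration = new Configuration({
  apiKey: process.env.OPENAI_API_KEY,
})

export const openAIApi = new OpenAIApi(configuration)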

Completion

Given a prompt, the model will return one or more predicted completions, and can also return the probabilities of alternative tokens at each position.

Available models:

  • text-davinci-003
  • text-davinci-002
  • text-curie-001
  • text-babbage-001
  • text-ada-001

Create completion

Creates a completion for the provided prompt and parameters.

To work with Create completion, we use our own OpenAI service with the method openAIApi.createCompletion.

Usage example
import { FastifyPluginAsyncTypebox } from '@fastify/type-provider-typebox'
import { Type } from '@sinclair/typebox'
import { openAIApi } from '@/services/openai'

export const completionSchema = {
  tags: ['OpenAI Example'],
  summary: 'OpenAI completion example',
  body: Type.Object({
    prompt: Type.String(),
  }),
  response: {
    200: Type.Object({
      text: Type.String(),
    }),
  },
  security: [{ apiKey: [] }] as Security,
}

const completion: FastifyPluginAsyncTypebox = async (fastify) => {
  fastify.post(
    '/completion',
    {
      schema: completionSchema,
      onRequest: fastify.verify(fastify.BearerToken),
    },
    async (request) => {
      const { prompt } = request.body
      // Request a completion from the model for the user's prompt
      const { data } = await openAIApi.createCompletion({
        model: 'text-davinci-003',
        prompt,
      })
      const text = data.choices[0]?.text

      return {
        text,
      }
    },
  )
}

export default completion

Chat

Given a list of messages describing a conversation, the model will return a response.

Available models:

  • gpt-4
  • gpt-4-0314
  • gpt-4-32k
  • gpt-4-32k-0314
  • gpt-3.5-turbo
  • gpt-3.5-turbo-0301

Create chat completion

Creates a model response for the given chat conversation.

To work with Create chat completion, we use our own OpenAI service with the method openAIApi.createChatCompletion.

Usage example
import { FastifyPluginAsyncTypebox } from '@fastify/type-provider-typebox'
import { Type } from '@sinclair/typebox'
import { openAIApi } from '@/services/openai'

export const chatCompletionSchema = {
  tags: ['OpenAI Example'],
  summary: 'OpenAI chat completion example',
  body: Type.Object({
    prompt: Type.String(),
  }),
  response: {
    200: Type.Object({
      text: Type.String(),
    }),
  },
  security: [{ apiKey: [] }] as Security,
}

const chatCompletion: FastifyPluginAsyncTypebox = async (fastify) => {
  fastify.post(
    '/chat-completion',
    {
      schema: chatCompletionSchema,
      onRequest: fastify.verify(fastify.BearerToken),
    },
    async (request) => {
      const { prompt } = request.body
      // Send the prompt as a user message, with a system message
      // setting the assistant's behavior
      const { data } = await openAIApi.createChatCompletion({
        model: 'gpt-3.5-turbo',
        temperature: 0.5,
        messages: [
          {
            role: 'system',
            content: 'You are a helpful assistant.',
          },
          {
            role: 'user',
            content: prompt,
          },
        ],
      })
      const text = data.choices[0]?.message?.content

      return {
        text,
      }
    },
  )
}

export default chatCompletion

Audio

Learn how to turn audio into text.

Available models: whisper-1

Related guide: Speech to text

Create transcription

Transcribes audio into the input language.

To work with Create transcription, we use our own OpenAI service with the methods createTranscriptionWithTime or openAIApi.createTranscription.

  • createTranscriptionWithTime is designed to get a transcription with timestamps for an audio file.

    Internal Type
    const createTranscriptionWithTime: (audio: File) => Promise<
      {
        start: string
        end: string
        text: string
      }[]
    >
    Usage example
    import { FastifyPluginAsyncTypebox } from '@fastify/type-provider-typebox'
    import { Type } from '@sinclair/typebox'
    import { MultipartFile } from '@/models'
    import { createTranscriptionWithTime } from '@/services/openai'

    export const transcriptionSchema = {
      tags: ['OpenAI Example'],
      summary: 'OpenAI transcription example',
      consumes: ['multipart/form-data'],
      body: Type.Object({
        audio: MultipartFile,
      }),
      response: {
        200: Type.Object({
          transcription: Type.Array(
            Type.Object({
              start: Type.String(),
              end: Type.String(),
              text: Type.String(),
            }),
          ),
        }),
      },
      security: [{ apiKey: [] }] as Security,
    }

    const transcription: FastifyPluginAsyncTypebox = async (fastify) => {
      fastify.post(
        '/transcription',
        {
          schema: transcriptionSchema,
          onRequest: fastify.verify(fastify.BearerToken),
        },
        async (request) => {
          const { audio } = request.body
          // Returns an array of { start, end, text } segments
          const data = await createTranscriptionWithTime(audio)

          return {
            transcription: data,
          }
        },
      )
    }

    export default transcription
  • openAIApi.createTranscription is designed to get the transcription of an audio file in different formats (json by default).

    Usage example
    import { FastifyPluginAsyncTypebox } from '@fastify/type-provider-typebox'
    import { Type } from '@sinclair/typebox'
    import { MultipartFile } from '@/models'
    import { openAIApi } from '@/services/openai'

    export const transcriptionSchema = {
      tags: ['OpenAI Example'],
      summary: 'OpenAI transcription example',
      consumes: ['multipart/form-data'],
      body: Type.Object({
        audio: MultipartFile,
      }),
      response: {
        200: Type.Object({
          transcription: Type.String(),
        }),
      },
      security: [{ apiKey: [] }] as Security,
    }

    const transcription: FastifyPluginAsyncTypebox = async (fastify) => {
      fastify.post(
        '/transcription',
        {
          schema: transcriptionSchema,
          onRequest: fastify.verify(fastify.BearerToken),
        },
        async (request) => {
          const { audio } = request.body
          // Transcribe the uploaded audio with the Whisper model
          const { data } = await openAIApi.createTranscription(
            audio,
            'whisper-1',
          )

          return {
            transcription: data.text,
          }
        },
      )
    }

    export default transcription

Create translation

Translates audio into English.

To work with Create translation, you can use our own OpenAI service with the method openAIApi.createTranslation.

Usage example
import { FastifyPluginAsyncTypebox } from '@fastify/type-provider-typebox'
import { Type } from '@sinclair/typebox'
import { MultipartFile } from '@/models'
import { openAIApi } from '@/services/openai'

export const translationSchema = {
  tags: ['OpenAI Example'],
  summary: 'OpenAI translation example',
  consumes: ['multipart/form-data'],
  body: Type.Object({
    audio: MultipartFile,
  }),
  response: {
    200: Type.Object({
      text: Type.String(),
    }),
  },
  security: [{ apiKey: [] }] as Security,
}

const translation: FastifyPluginAsyncTypebox = async (fastify) => {
  fastify.post(
    '/translation',
    {
      schema: translationSchema,
      onRequest: fastify.verify(fastify.BearerToken),
    },
    async (request) => {
      const { audio } = request.body
      // Translate the uploaded audio into English text
      const { data } = await openAIApi.createTranslation(
        audio,
        'whisper-1',
      )

      // data has the shape { text: string }, matching the response schema
      return data
    },
  )
}

export default translation
tip

We are not currently using Create translation, but you can read the OpenAI API reference on Create translation to learn more.

Images

Given a prompt and/or an input image, the model will generate a new image.

tip

We are not currently using Generate images, but you can use our own OpenAI service to implement this.

Related guide: Image generation

API reference: Images
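
If you decide to implement image generation, a minimal sketch might look like this (the helper, prompt, and size are illustrative; createImage is the matching method in the openai v3 SDK that the service wraps):

import { openAIApi } from '@/services/openai'

// A hypothetical helper, not part of Q-Consultation:
// request a single 512x512 image for a text prompt.
const generateImage = async (prompt: string) => {
  const { data } = await openAIApi.createImage({
    prompt,
    n: 1,
    size: '512x512',
  })

  // The response contains a list of generated image URLs
  return data.data[0]?.url
}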

Embeddings

Get a vector representation of a given input that can be easily consumed by machine learning models and algorithms.

Available models:

  • text-embedding-ada-002
  • text-search-ada-doc-001
tip

We are not currently using Embeddings, but you can use our own OpenAI service to implement this.

Related guide: Embeddings

API reference: Create embeddings
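
Similarly, Embeddings could be wired up through the service with a small helper (the helper name is illustrative; createEmbedding is the matching method in the openai v3 SDK):

import { openAIApi } from '@/services/openai'

// A hypothetical helper: embed a piece of text. The returned vector
// can be stored and compared with cosine similarity for semantic search.
const embedText = async (input: string) => {
  const { data } = await openAIApi.createEmbedding({
    model: 'text-embedding-ada-002',
    input,
  })

  return data.data[0]?.embedding // number[]
}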

Files

Files are used to upload documents that can be used with features like Fine-tuning.

tip

We are not currently using Files, but you can use our own OpenAI service to implement this.

API reference: Files
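
Should you need it, uploading a document could look roughly like this (the helper and file path are illustrative; createFile is the matching method in the openai v3 SDK, whose types expect a browser File, hence the cast for a Node.js stream):

import fs from 'fs'
import { openAIApi } from '@/services/openai'

// A hypothetical helper: upload a JSONL document for fine-tuning.
const uploadTrainingFile = async (path: string) => {
  const { data } = await openAIApi.createFile(
    fs.createReadStream(path) as any, // SDK types expect a browser File
    'fine-tune',
  )

  return data.id // file ID to reference in a fine-tune job
}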

Fine-tunes

Manage fine-tuning jobs to tailor a model to your specific training data.

Available models:

  • davinci
  • curie
  • babbage
  • ada
tip

We are not currently using Fine-tunes, but you can use our own OpenAI service to implement this.

Related guide: Fine-tune models

API reference: Fine-tunes
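
A fine-tuning job could be started through the service along these lines (the helper is illustrative and the file ID a placeholder; createFineTune is the matching method in the openai v3 SDK):

import { openAIApi } from '@/services/openai'

// A hypothetical helper: start a fine-tuning job from a previously
// uploaded training file (see the Files API above).
const startFineTune = async (trainingFileId: string) => {
  const { data } = await openAIApi.createFineTune({
    training_file: trainingFileId,
    model: 'davinci',
  })

  return { id: data.id, status: data.status }
}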

Moderations

Given an input text, outputs whether the model classifies it as violating OpenAI's content policy.

Available models:

  • text-moderation-stable
  • text-moderation-latest
tip

We are not currently using Moderations, but you can use our own OpenAI service to implement this.

Related guide: Moderations

API reference: Create moderation
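
Moderation could be added through the service with a helper like this (the helper is illustrative; createModeration is the matching method in the openai v3 SDK):

import { openAIApi } from '@/services/openai'

// A hypothetical helper: check user-submitted text against the
// content policy before passing it on to a completion model.
const isFlagged = async (input: string) => {
  const { data } = await openAIApi.createModeration({
    model: 'text-moderation-stable',
    input,
  })

  // results[0].flagged is true if any category was triggered
  return data.results[0]?.flagged
}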