Prompt Types

LabsLLM supports different ways to send prompts to LLM providers. Below you'll find examples for the supported prompt types.

Text Generation

The most basic way to interact with a model is through simple text prompts:

```php
$execute = LabsLLM::text()
    ->using(new OpenAI('your-api-key', 'gpt-4o'))
    ->executePrompt('Explain quantum computing in simple terms');

$response = $execute->getResponseData();

echo $response->response;
```

With System Instructions

System instructions allow you to guide the model's behavior and set the context for the conversation:

```php
$execute = LabsLLM::text()
    ->using(new OpenAI('your-api-key', 'gpt-4o'))
    ->withSystemMessage('You are an expert in quantum physics explaining concepts to beginners.')
    ->executePrompt('What is quantum entanglement?');

$response = $execute->getResponseData();

echo $response->response;
```

Understanding the Response

When you call getResponseData(), you receive an object whose properties describe the interaction. The most commonly used property is response, which holds the model's text output:

```php
$response = $execute->getResponseData();

// Access the text response
echo $response->response;
```

The response object also contains information about tool calls and their results. For a detailed explanation of the response structure, see the API Response Object guide.
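As a rough sketch of what inspecting tool-call information might look like, the snippet below assumes hypothetical property names (`calledTools`, a per-call `name`); the actual structure is documented in the API Response Object guide:

```php
$response = $execute->getResponseData();

// NOTE: 'calledTools' and 'name' are illustrative placeholders,
// not confirmed property names — see the API Response Object guide.
if (!empty($response->calledTools)) {
    foreach ($response->calledTools as $toolCall) {
        echo "Tool invoked: {$toolCall->name}\n";
    }
}

echo $response->response;
```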

Tips for Effective Prompts

  1. Be specific: the more specific your prompt, the better the response.
  2. Use system messages: set context with system messages to guide the model's behavior.
  3. Include examples: for complex tasks, include examples of desired outputs.
  4. Chain prompts: for complex reasoning, break problems into multiple steps.
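Tip 4 can be sketched with only the methods shown above: run one prompt, then interpolate its output into a second prompt. This is an illustrative pattern (the prompts and model choice are arbitrary, and a valid API key is required to actually run it):

```php
// Step 1: ask the model for an outline of the problem.
$outline = LabsLLM::text()
    ->using(new OpenAI('your-api-key', 'gpt-4o'))
    ->executePrompt('List the key steps to estimate annual energy savings from switching a home to LED bulbs.')
    ->getResponseData();

// Step 2: feed the outline back in and ask for the worked answer.
$answer = LabsLLM::text()
    ->using(new OpenAI('your-api-key', 'gpt-4o'))
    ->withSystemMessage('You are a precise energy consultant. Show your arithmetic.')
    ->executePrompt("Using these steps:\n{$outline->response}\nWork through the estimate for a household with 20 bulbs.")
    ->getResponseData();

echo $answer->response;
```

Each step stays small and inspectable, so you can log or validate intermediate output before spending tokens on the next call.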

For switching between different AI providers, see the Providers guide.

Released under the MIT License.