Prompt Types
LabsLLM supports several ways to send prompts to LLM providers. Below are examples of each supported prompt type.
Text Generation
The most basic way to interact with a model is through simple text prompts:
$execute = LabsLLM::text()
->using(new OpenAI('your-api-key', 'gpt-4o'))
->executePrompt('Explain quantum computing in simple terms');
$response = $execute->getResponseData();
echo $response->response;
With System Instructions
System instructions allow you to guide the model's behavior and set the context for the conversation:
$execute = LabsLLM::text()
->using(new OpenAI('your-api-key', 'gpt-4o'))
->withSystemMessage('You are an expert in quantum physics explaining concepts to beginners.')
->executePrompt('What is quantum entanglement?');
$response = $execute->getResponseData();
echo $response->response;
Understanding the Response
When you call getResponseData(), you receive an object with several properties describing the interaction. The most commonly used is response, which contains the text response from the model:
$response = $execute->getResponseData();
// Access the text response
echo $response->response;
The response object also contains information about tool calls and their results. For a detailed explanation of the response structure, see the API Response Object guide.
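As a rough sketch of what inspecting tool-call data might look like — note that the toolCalls property name below is an illustrative assumption, not the documented API; the actual property names are covered in the API Response Object guide:

```php
<?php

$response = $execute->getResponseData();

// "toolCalls" is a hypothetical property name used only for illustration;
// consult the API Response Object guide for the real structure.
foreach ($response->toolCalls ?? [] as $call) {
    // Each entry would describe one tool invocation made by the model.
    var_dump($call);
}
```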
Tips for Effective Prompts
- Be specific: The more specific your prompt, the better the response
- Use system messages: Set context up front to guide the model's behavior
- Include examples: For complex tasks, include examples of desired outputs
- Chain prompts: For complex reasoning, break problems into multiple steps
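The "chain prompts" tip can be sketched using only the calls already shown above: run one prompt, then interpolate its response text into the next. The prompts themselves are illustrative.

```php
<?php

// Step 1: ask for an intermediate result.
$first = LabsLLM::text()
    ->using(new OpenAI('your-api-key', 'gpt-4o'))
    ->executePrompt('List the three main obstacles to building a quantum computer.');

$obstacles = $first->getResponseData()->response;

// Step 2: feed the first answer into a follow-up prompt.
$second = LabsLLM::text()
    ->using(new OpenAI('your-api-key', 'gpt-4o'))
    ->executePrompt(
        "For each obstacle below, suggest one current research direction:\n" . $obstacles
    );

echo $second->getResponseData()->response;
```

Because each step is a separate call, you can also validate or trim the intermediate text before passing it on.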
For switching between different AI providers, see the Providers guide.