Most blogs do not lose traffic because the article is bad. They lose traffic because the publishing workflow forgets the boring-but-important SEO layer: a focused title, a search-friendly meta description, a clean slug, useful keywords, FAQ schema, and internal link suggestions. An AI-powered blog SEO generator is one of the highest-ROI AI features you can add to a content platform because it sits exactly where editors already work: after the draft exists, before the article goes live.
In this tutorial we will build a practical Laravel feature that takes a blog draft and returns structured SEO recommendations using either Ollama for local/private generation or OpenAI for hosted generation. The important part is not just calling an LLM. The important part is forcing the model to return predictable JSON, validating that JSON, and turning the output into fields your CMS can actually use.
What we are building
The generator will read a post title, short description, and content, then return:
- SEO title under 60 characters.
- Meta description around 150-160 characters.
- Slug suggestions based on the real search intent.
- Primary and secondary keywords for ranking signals.
- FAQ items that can be turned into JSON-LD schema.
- Internal link suggestions based on existing posts.
The final flow looks like this:
Editor clicks "Generate SEO"
↓
Laravel sends title + excerpt + cleaned article content to SeoGenerator
↓
Provider calls Ollama or OpenAI
↓
Laravel validates the JSON response
↓
Admin UI fills meta_title, meta_description, keywords, FAQ, and slug suggestions
Step 1: Add configuration
Start with a small config file so you can switch between local and hosted models without changing application code.
// config/ai.php
return [
'seo_provider' => env('AI_SEO_PROVIDER', 'ollama'),
'ollama' => [
'base_url' => env('OLLAMA_BASE_URL', 'http://127.0.0.1:11434'),
'model' => env('OLLAMA_SEO_MODEL', 'llama3.1:8b'),
],
'openai' => [
'api_key' => env('OPENAI_API_KEY'),
'model' => env('OPENAI_SEO_MODEL', 'gpt-4o-mini'),
],
];
Your .env can start with Ollama for local development:
AI_SEO_PROVIDER=ollama
OLLAMA_BASE_URL=http://127.0.0.1:11434
OLLAMA_SEO_MODEL=llama3.1:8b
Then pull the model locally:
ollama pull llama3.1:8b
ollama serve
Step 2: Create a structured DTO
Do not pass raw model output around your app. Convert it into a small data object so the rest of your CMS can trust the shape.
// app/Data/SeoSuggestionData.php
namespace App\Data;
class SeoSuggestionData
{
public function __construct(
public string $seoTitle,
public string $metaDescription,
public string $recommendedSlug,
public string $primaryKeyword,
public array $secondaryKeywords,
public array $faq,
public array $internalLinks,
) {}
public static function fromArray(array $data): self
{
return new self(
seoTitle: $data['seo_title'],
metaDescription: $data['meta_description'],
recommendedSlug: $data['recommended_slug'],
primaryKeyword: $data['primary_keyword'],
secondaryKeywords: $data['secondary_keywords'] ?? [],
faq: $data['faq'] ?? [],
internalLinks: $data['internal_links'] ?? [],
);
}
public function toArray(): array
{
return [
'seo_title' => $this->seoTitle,
'meta_description' => $this->metaDescription,
'recommended_slug' => $this->recommendedSlug,
'primary_keyword' => $this->primaryKeyword,
'secondary_keywords' => $this->secondaryKeywords,
'faq' => $this->faq,
'internal_links' => $this->internalLinks,
];
}
}
Step 3: Build the prompt
The prompt should be strict. Tell the model the exact JSON shape, the character limits, and the editorial rules. Good AI features are less about magic prompts and more about clear contracts.
// app/Services/Ai/SeoPrompt.php
namespace App\Services\Ai;
use Illuminate\Support\Str;
class SeoPrompt
{
public static function build(array $post, array $existingPosts = []): string
{
$content = Str::of(strip_tags($post['content'] ?? ''))
->squish()
->limit(8000, '');
$links = collect($existingPosts)
->map(fn ($item) => "- {$item['title']} ({$item['slug']})")
->implode("\n");
return <<<PROMPT
You are an expert technical SEO editor for a developer blog.
Return only valid JSON. Do not wrap it in markdown.
Create SEO recommendations for this article:
Title: {$post['title']}
Short description: {$post['short_description']}
Content:
{$content}
Existing posts for internal linking:
{$links}
Rules:
- seo_title must be under 60 characters.
- meta_description must be between 145 and 160 characters.
- recommended_slug must be lowercase kebab-case.
- primary_keyword must be one focused search phrase.
- secondary_keywords must contain 6 to 10 search phrases.
- faq must contain 3 question/answer pairs.
- internal_links must contain up to 5 relevant existing post slugs with a reason.
JSON schema:
{
"seo_title": "string",
"meta_description": "string",
"recommended_slug": "string",
"primary_keyword": "string",
"secondary_keywords": ["string"],
"faq": [
{"question": "string", "answer": "string"}
],
"internal_links": [
{"slug": "string", "reason": "string"}
]
}
PROMPT;
}
}
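For reference, the cleanup step inside SeoPrompt::build can be approximated in plain PHP. This is a sketch of what the squish() and limit() chain does here, not framework code:

```php
<?php

// Approximation of the content cleanup in SeoPrompt::build:
// strip HTML, collapse all runs of whitespace into single spaces
// (what Str::squish does), then cap the length for the prompt.
function cleanContent(string $html, int $max = 8000): string
{
    $text = strip_tags($html);
    $text = trim(preg_replace('/\s+/u', ' ', $text));

    return mb_substr($text, 0, $max);
}

echo cleanContent("<p>Hello   <b>world</b></p>"); // → 'Hello world'
```

Capping the content keeps the prompt inside the model's context window while still giving it enough of the article to infer search intent.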
Step 4: Create provider classes
Both Ollama and OpenAI can sit behind the same interface. That lets you develop locally with Ollama and switch to a hosted model if you need stronger output quality for production content.
// app/Services/Ai/SeoProvider.php
namespace App\Services\Ai;
interface SeoProvider
{
public function generate(string $prompt): array;
}
Here is the Ollama implementation. Passing 'format' => 'json' asks Ollama to return a JSON object instead of conversational text.
// app/Services/Ai/OllamaSeoProvider.php
namespace App\Services\Ai;
use Illuminate\Support\Facades\Http;
use RuntimeException;
class OllamaSeoProvider implements SeoProvider
{
public function generate(string $prompt): array
{
$response = Http::timeout(90)->post(
config('ai.ollama.base_url') . '/api/generate',
[
'model' => config('ai.ollama.model'),
'prompt' => $prompt,
'stream' => false,
'format' => 'json',
'options' => [
'temperature' => 0.2,
'num_ctx' => 8192,
],
]
);
if ($response->failed()) {
throw new RuntimeException('Ollama SEO generation failed.');
}
$json = $response->json('response');
$data = json_decode($json, true);
if (! is_array($data)) {
throw new RuntimeException('Ollama returned invalid JSON.');
}
return $data;
}
}
And here is the OpenAI version using Laravel's HTTP client. It asks for a JSON object and decodes the assistant message.
// app/Services/Ai/OpenAiSeoProvider.php
namespace App\Services\Ai;
use Illuminate\Support\Facades\Http;
use RuntimeException;
class OpenAiSeoProvider implements SeoProvider
{
public function generate(string $prompt): array
{
$response = Http::withToken(config('ai.openai.api_key'))
->timeout(60)
->post('https://api.openai.com/v1/chat/completions', [
'model' => config('ai.openai.model'),
'response_format' => ['type' => 'json_object'],
'messages' => [
['role' => 'system', 'content' => 'You return strict JSON for SEO automation.'],
['role' => 'user', 'content' => $prompt],
],
'temperature' => 0.2,
]);
if ($response->failed()) {
throw new RuntimeException('OpenAI SEO generation failed.');
}
$json = $response->json('choices.0.message.content');
$data = json_decode($json, true);
if (! is_array($data)) {
throw new RuntimeException('OpenAI returned invalid JSON.');
}
return $data;
}
}
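Even with JSON mode enabled, some models still wrap their output in markdown fences. If you see that in practice, a small helper (hypothetical, not part of the providers above) can make decoding more forgiving before you give up and throw:

```php
<?php

// Hypothetical helper: strip a ```json ... ``` wrapper before decoding,
// so a provider can recover when the model ignores the "JSON only" rule.
function decodeModelJson(string $raw): array
{
    $clean = trim($raw);

    // Remove a leading ``` or ```json fence and a trailing ``` fence.
    $clean = preg_replace('/^```(?:json)?\s*/i', '', $clean);
    $clean = preg_replace('/\s*```$/', '', $clean);

    $data = json_decode($clean, true);

    if (! is_array($data)) {
        throw new RuntimeException('Model returned invalid JSON.');
    }

    return $data;
}
```

Both providers could call this instead of json_decode directly; the validation layer in the next step still catches anything the decode step lets through.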
Step 5: Validate the AI output
Never save AI output before validation. Models can ignore limits, forget fields, or invent link slugs. A validation layer turns a flaky text generator into a reliable application feature. Note that the limits below are deliberately a little looser than the prompt rules (for example 120-170 characters for the meta description instead of 145-160), so a near-miss still passes while a clearly broken response fails.
// app/Services/Ai/SeoSuggestionValidator.php
namespace App\Services\Ai;
use Illuminate\Support\Facades\Validator;
use Illuminate\Support\Str;
use Illuminate\Validation\ValidationException;
class SeoSuggestionValidator
{
public function validate(array $data): array
{
$validator = Validator::make($data, [
'seo_title' => ['required', 'string', 'max:60'],
'meta_description' => ['required', 'string', 'min:120', 'max:170'],
'recommended_slug' => ['required', 'string', 'max:90'],
'primary_keyword' => ['required', 'string', 'max:80'],
'secondary_keywords' => ['required', 'array', 'min:3', 'max:10'],
'secondary_keywords.*' => ['required', 'string', 'max:80'],
'faq' => ['required', 'array', 'min:2', 'max:5'],
'faq.*.question' => ['required', 'string', 'max:160'],
'faq.*.answer' => ['required', 'string', 'max:400'],
'internal_links' => ['array', 'max:5'],
'internal_links.*.slug' => ['required', 'string', 'max:120'],
'internal_links.*.reason' => ['required', 'string', 'max:200'],
]);
if ($validator->fails()) {
throw new ValidationException($validator);
}
$validated = $validator->validated();
$validated['recommended_slug'] = Str::slug($validated['recommended_slug']);
return $validated;
}
}
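The final Str::slug call is the safety net for the slug field. Its behavior can be approximated in plain PHP if you want to see what the normalization does (this is a rough sketch; the real Laravel helper also transliterates accented characters):

```php
<?php

// Rough approximation of the Str::slug normalization applied after
// validation: lowercase, collapse non-alphanumerics to single hyphens,
// and trim stray hyphens from the edges.
function toSlug(string $value): string
{
    $slug = strtolower(trim($value));
    $slug = preg_replace('/[^a-z0-9]+/', '-', $slug);

    return trim($slug, '-');
}

echo toSlug('AI SEO Generator, for Laravel!'); // → 'ai-seo-generator-for-laravel'
```

This guarantees that even if the model returns "My Great Slug!" the stored value is still valid URL material.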
Step 6: Wire the generator service
The service loads nearby posts for internal links, builds the prompt, calls the active provider, validates the response, and returns a DTO.
// app/Services/Ai/BlogSeoGenerator.php
namespace App\Services\Ai;
use App\Data\SeoSuggestionData;
use App\Models\Post;
class BlogSeoGenerator
{
public function __construct(
private SeoSuggestionValidator $validator,
) {}
public function generateFor(Post $post): SeoSuggestionData
{
$provider = $this->provider();
$existingPosts = Post::query()
->whereKeyNot($post->id)
->where('published', true)
->latest('published_at')
->limit(20)
->get(['title', 'slug'])
->map(fn (Post $item) => [
'title' => $item->title,
'slug' => $item->slug,
])
->all();
$prompt = SeoPrompt::build([
'title' => $post->title,
'short_description' => $post->short_description,
'content' => $post->content,
], $existingPosts);
$raw = $provider->generate($prompt);
$validated = $this->validator->validate($raw);
return SeoSuggestionData::fromArray($validated);
}
private function provider(): SeoProvider
{
if (app()->bound(SeoProvider::class)) {
return app(SeoProvider::class);
}
return match (config('ai.seo_provider')) {
'openai' => app(OpenAiSeoProvider::class),
default => app(OllamaSeoProvider::class),
};
}
}
Step 7: Add an admin endpoint
This endpoint can power a button in your admin panel. It returns suggestions without immediately overwriting the post, which gives the editor final control.
// routes/web.php
use App\Http\Controllers\Admin\PostSeoSuggestionController;
Route::middleware(['auth'])
->prefix('admin')
->name('admin.')
->group(function () {
Route::post('/posts/{post}/seo-suggestions', PostSeoSuggestionController::class)
->name('posts.seo-suggestions');
});
// app/Http/Controllers/Admin/PostSeoSuggestionController.php
namespace App\Http\Controllers\Admin;
use App\Http\Controllers\Controller;
use App\Models\Post;
use App\Services\Ai\BlogSeoGenerator;
class PostSeoSuggestionController extends Controller
{
public function __invoke(Post $post, BlogSeoGenerator $generator)
{
$this->authorize('update', $post);
return response()->json([
'data' => $generator->generateFor($post)->toArray(),
]);
}
}
Step 8: Apply suggestions safely
When the editor accepts the suggestion, save only the fields your content model already supports. For example, if your posts table has meta_title, meta_description, and meta_keywords, the update can be boring and safe:
$post->update([
'meta_title' => $suggestion['seo_title'],
'meta_description' => $suggestion['meta_description'],
'meta_keywords' => implode(', ', [
$suggestion['primary_keyword'],
...$suggestion['secondary_keywords'],
]),
]);
If you want FAQ schema, store the FAQ array in a JSON column such as seo_faq (with an array cast on the Post model so it round-trips as an array), then render it as JSON-LD on the public article page:
@if (! empty($post->seo_faq))
<script type="application/ld+json">
{!! json_encode([
'@context' => 'https://schema.org',
'@type' => 'FAQPage',
'mainEntity' => collect($post->seo_faq)->map(fn ($item) => [
'@type' => 'Question',
'name' => $item['question'],
'acceptedAnswer' => [
'@type' => 'Answer',
'text' => $item['answer'],
],
])->values(),
], JSON_UNESCAPED_SLASHES | JSON_UNESCAPED_UNICODE) !!}
</script>
@endif
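To see exactly what that Blade snippet emits, here is the same FAQPage structure built in plain PHP (a sketch with sample data, so you can inspect the JSON-LD shape outside a view):

```php
<?php

// Sketch: the FAQPage structure the Blade snippet renders, built in
// plain PHP with sample data so the JSON-LD shape is easy to inspect.
$faq = [
    ['question' => 'Can AI generate SEO metadata?', 'answer' => 'Yes, with validation and review.'],
];

$schema = [
    '@context' => 'https://schema.org',
    '@type' => 'FAQPage',
    'mainEntity' => array_map(fn ($item) => [
        '@type' => 'Question',
        'name' => $item['question'],
        'acceptedAnswer' => [
            '@type' => 'Answer',
            'text' => $item['answer'],
        ],
    ], $faq),
];

echo json_encode($schema, JSON_UNESCAPED_SLASHES | JSON_UNESCAPED_UNICODE | JSON_PRETTY_PRINT);
```

You can paste the output into a rich-results testing tool to confirm the schema is eligible for FAQ snippets before shipping.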
Step 9: Queue it for real publishing workflows
LLM calls can take several seconds, especially with local models. For a smoother editor experience, dispatch a job and notify the UI when the result is ready.
// app/Jobs/GeneratePostSeoSuggestions.php
namespace App\Jobs;
use App\Models\Post;
use App\Services\Ai\BlogSeoGenerator;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Queue\Queueable;
class GeneratePostSeoSuggestions implements ShouldQueue
{
use Queueable;
public function __construct(public int $postId) {}
public function handle(BlogSeoGenerator $generator): void
{
$post = Post::findOrFail($this->postId);
$suggestions = $generator->generateFor($post);
cache()->put(
"post:{$post->id}:seo-suggestions",
$suggestions->toArray(),
now()->addMinutes(30)
);
}
}
Testing the generator
The best test is not whether the model is smart. The best test is whether your app behaves correctly when the model returns valid data, invalid data, or slow responses. Mock the provider and test the service boundary.
use App\Models\Post;
use App\Services\Ai\BlogSeoGenerator;
use App\Services\Ai\SeoProvider;
it('generates validated seo suggestions for a post', function () {
app()->bind(SeoProvider::class, fn () => new class implements SeoProvider {
public function generate(string $prompt): array
{
return [
'seo_title' => 'AI SEO Generator for Laravel',
'meta_description' => 'Build an AI-powered SEO generator in Laravel with structured metadata, keyword ideas, FAQ schema, internal links, and safe validation before publishing.',
'recommended_slug' => 'ai-seo-generator-laravel',
'primary_keyword' => 'AI SEO generator Laravel',
'secondary_keywords' => ['Laravel AI', 'SEO automation', 'meta description generator'],
'faq' => [
['question' => 'Can AI generate SEO metadata?', 'answer' => 'Yes, when the output is validated and reviewed before publishing.'],
['question' => 'Should editors review AI SEO output?', 'answer' => 'Yes. AI should assist the workflow, not replace editorial judgment.'],
],
'internal_links' => [],
];
}
});
$post = Post::factory()->create([
'title' => 'Build an AI SEO Generator',
'short_description' => 'Use AI to improve blog metadata.',
'content' => '<p>Long article content here.</p>',
]);
$suggestions = app(BlogSeoGenerator::class)->generateFor($post);
expect($suggestions->primaryKeyword)->toBe('AI SEO generator Laravel');
});
Production tips
- Keep humans in the loop. Let AI suggest metadata, but let editors approve it.
- Store prompt versions. When rankings improve or decline, you need to know which prompt generated the metadata.
- Cache by content hash. If the article has not changed, reuse the previous suggestion instead of paying for another generation.
- Limit input size. Send enough content for context, not the entire database row.
- Track acceptance rate. If editors constantly rewrite AI output, improve the prompt or switch models.
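The content-hash caching tip can be sketched in a few lines. The key derivation below is a plain-PHP illustration with hypothetical names; in the app it would feed something like cache()->remember():

```php
<?php

// Sketch of the "cache by content hash" tip: derive a stable key from the
// fields that feed the prompt, so an unchanged article reuses the previous
// suggestion instead of paying for another generation.
function seoCacheKey(int $postId, string $title, string $content): string
{
    $hash = hash('sha256', $title . '|' . $content);

    return "post:{$postId}:seo:{$hash}";
}
```

Because the hash changes whenever the title or content changes, stale suggestions expire naturally without any manual invalidation.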
Why this feature is worth building
This is the kind of AI feature that fits naturally into a real product. It does not ask writers to learn a new workflow. It helps them finish the work they already have to do. A good SEO generator turns every article into a more complete publishing package: better snippets in search results, cleaner keywords, richer FAQ schema, and smarter internal links.
The winning AI features in content platforms will not be giant chat boxes. They will be small, focused tools that remove friction from publishing.
Start with metadata generation, then expand into content briefs, competitor gap analysis, automatic schema, and internal link recommendations. That gives your blog a practical AI layer without turning the CMS into a science project.