
How One of Our Clients Ranked in ChatGPT Within Two Weeks Using Obsurfable


When people talk about SEO, they usually mean Google.

When founders talk about distribution in 2026, they increasingly mean AI.

Over the past year, we have seen a shift in how users discover products, platforms, and content. Instead of clicking through ten blue links, they open ChatGPT, describe what they are looking for, and trust the answer they get back. If your brand is not mentioned in that answer, you effectively do not exist in that moment.

This is the story of how one of our clients, In Plain English, went from not being surfaced in ChatGPT for key developer-related queries to appearing in responses within two weeks.

It was not luck. It was not a viral moment. It was a structured feedback loop.


The Objective

In Plain English is a well-known developer publication with thousands of articles across software engineering, JavaScript, AI, and system design. Their goal was straightforward but ambitious: they wanted to rank as a platform for developer content when users asked ChatGPT questions like:

  • Where can I publish developer tutorials?
  • What are the best platforms for software engineering articles?
  • Where should I write about JavaScript or AI?

Despite having authority, history, and depth of content, they were not consistently being surfaced in ChatGPT responses for those types of prompts.

From a traditional SEO perspective, they were doing well. From an AI discoverability perspective, there was a gap.

That gap is where Obsurfable comes in.


Step One: Measuring Reality Instead of Guessing

The first thing we did was set up their site in Obsurfable and measure how AI actually answered their target questions.

We connected their website in Obsurfable (Company setup) and added:

  • Their sitemap and LLMs.txt URL so the app could fetch and analyze their full site
  • Their use cases, keywords, and positioning so we could define the right prompts

That gave the platform a clear picture of what already existed and how they wanted to be found.
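For context, the LLMs.txt file is a plain Markdown file at the site root that describes the site for language models, per the llms.txt convention. A minimal sketch for a publication like this might look like the example below; the sections, paths, and descriptions are illustrative, not the client's actual file:

```
# In Plain English

> A developer publication covering software engineering, JavaScript, AI, and
> system design, open to contributor tutorials.

## Write for us
- [Publish developer tutorials](https://plainenglish.io/write): How to submit and publish articles

## Popular topics
- [JavaScript](https://plainenglish.io/tag/javascript): JavaScript articles and tutorials
- [AI](https://plainenglish.io/tag/ai): AI and machine learning articles
```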

Then we used what Obsurfable is built for: Prompts and retrieval.

We defined the exact questions that matter to the business—where can I publish developer tutorials, what are the best platforms for software engineering articles, and so on—and ran retrieval so we could see how an AI (e.g., ChatGPT) answered those questions and whether it mentioned In Plain English or cited their URLs.

Not keyword variations. Not theoretical phrases. Actual user-style prompts, with results stored per run so we could compare over time.
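Outside the platform, the shape of that baseline check is easy to sketch. The snippet below is a minimal illustration rather than Obsurfable's implementation: it assumes the OpenAI Python SDK with an API key in the environment, and the prompts, brand strings, and model choice are placeholders.

```python
# Minimal sketch: run target prompts through an LLM and record brand mentions.
import datetime
import json

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPTS = [
    "Where can I publish developer tutorials?",
    "What are the best platforms for software engineering articles?",
    "Where should I write about JavaScript or AI?",
]
BRAND_TERMS = ["in plain english", "plainenglish.io"]

results = []
for prompt in PROMPTS:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    answer = response.choices[0].message.content
    mentioned = any(term in answer.lower() for term in BRAND_TERMS)
    results.append({"prompt": prompt, "mentioned": mentioned, "answer": answer})

# Persist the run so later re-tests can be compared against this baseline.
run_file = f"run-{datetime.date.today().isoformat()}.json"
with open(run_file, "w") as f:
    json.dump(results, f, indent=2)

mention_rate = sum(r["mentioned"] for r in results) / len(results)
print(f"Mentioned in {mention_rate:.0%} of target prompts")
```

Repeating a check like this on a schedule gives a simple time series of mention rates per prompt, which is the kind of number worth tracking over time.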

The results were clear. For several high-intent queries about publishing developer content, In Plain English was either missing entirely or inconsistently mentioned.

That was the baseline. Without measuring this first, any content strategy would have been guesswork.


Step Two: Identifying the Structural Gaps

Obsurfable turns retrieval results into Insights: summaries, gaps, strengths, and granular recommendations (meta, content, page, messaging, etc.) so you see not only whether you’re mentioned, but where you’re weak and what to change.

We also used Queries (query fan-out) to map the sub-questions AI might answer for “publishing developer content”—definitions, use cases, comparisons—and Site analysis to see how their sitemap and LLMs.txt lined up with that. That showed coverage and gaps at the page and section level.
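As a rough, stand-alone illustration of the fan-out idea (the page URLs, summaries, and keyword-overlap heuristic below are placeholders, not how Obsurfable actually scores coverage):

```python
# Sketch of query fan-out: expand a head query into sub-questions, then run a
# naive keyword-overlap check against a small inventory of known pages.
from openai import OpenAI

client = OpenAI()

head_query = "publishing developer content"
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": (
            f"List 8 specific questions a user might ask an AI assistant "
            f"about {head_query}. One question per line, no numbering."
        ),
    }],
)
sub_questions = [
    line.strip("-• ").strip()
    for line in response.choices[0].message.content.splitlines()
    if line.strip()
]

# Hypothetical page inventory: URL -> rough summary of what the page covers.
site_pages = {
    "https://plainenglish.io/write": "publish developer tutorials submission guidelines contributors",
    "https://plainenglish.io/about": "developer publication software engineering javascript ai system design",
}

for question in sub_questions:
    keywords = [w.lower() for w in question.split() if len(w) > 4]
    covered = any(kw in text for kw in keywords for text in site_pages.values())
    print(("covered  " if covered else "GAP      ") + question)
```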

For In Plain English, we identified several issues:

  • The positioning as a “developer publishing platform” was implied but not explicit
  • Use-case pages targeting writers were not clearly structured
  • The language on the site did not consistently match how users phrase questions inside ChatGPT
  • Competitors being surfaced had clearer declarative positioning

LLMs don’t rank pages like search engines. They synthesize patterns from training data and web signals. If your positioning is scattered or implicit, you’re less likely to be confidently referenced.

The issue wasn’t lack of content. It was lack of clarity and alignment.


Step Three: Implementing Focused Changes

Based on the gaps, we recommended a series of targeted improvements:

  • Create and refine pages that explicitly address publishing developer tutorials
  • Strengthen internal linking around contributor-focused content
  • Make positioning statements more declarative and less assumed
  • Align headings and subheadings with realistic prompt phrasing
  • Clarify audience definitions such as software engineers, technical writers, and AI developers

None of these changes were radical. However, they were directly tied to observed deficiencies in LLM responses.

This is the core philosophy behind Obsurfable. We do not recommend abstract content strategies. We recommend adjustments based on measurable AI visibility gaps.
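To make "declarative positioning" and prompt-aligned headings concrete, here is an invented before-and-after in plain HTML; the copy is illustrative, not the client's actual markup:

```html
<!-- Before: positioning implied, heading does not match how users ask -->
<h1>In Plain English</h1>
<p>Stories and ideas for the modern developer.</p>

<!-- After: declarative positioning, heading mirrors a realistic prompt -->
<h1>Publish Developer Tutorials on In Plain English</h1>
<p>In Plain English is a developer publishing platform for software engineering,
JavaScript, AI, and system design articles, open to contributing writers.</p>
```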


Step Four: AI-Aligned Content Creation

After structural improvements were in place, we used Obsurfable’s Content feature to speed things up.

Content in Obsurfable is generated from Insights—the gaps and granular recommendations from prompt runs and, where relevant, from Query coverage (e.g. nodes in the query fan-out that were weak or missing). It uses:

  • The gaps and recommendations from retrieval and insights
  • The company’s target keywords and use cases
  • The sitemap and LLMs.txt context
  • Existing site and content structure

So the posts are long-form content aimed at the queries that weren’t yet triggering mentions, not generic blog fodder.

Publishing works in two ways: export (Markdown or HTML to use in their own CMS or site) or subdomain publishing (they can connect a subdomain like resources.plainenglish.io and Obsurfable hosts the blog there). In either case, the loop from insight → recommendation → generated post → publish is short, which keeps iteration fast. When testing, editing, and publishing don’t each take weeks, improvement compounds quickly.
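For the subdomain route, the client-side setup is typically a single DNS record. The record below is a generic sketch with a placeholder target, since the real value comes from the connection flow in the app:

```
; Hypothetical DNS record for subdomain publishing (target is a placeholder)
resources.plainenglish.io.   CNAME   hosting.obsurfable.example.
```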


Step Five: Re-testing and Validation

One week after publishing the updated pages and AI-aligned content, we ran the same prompts again inside ChatGPT.
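In terms of the stand-alone sketch from Step One, the re-test is simply another run of the same script, after which the stored results can be compared; the filenames and dates below are illustrative:

```python
# Compare the baseline run with a later run using the JSON files saved earlier.
import json

def mention_rate(path):
    with open(path) as f:
        results = json.load(f)
    return sum(r["mentioned"] for r in results) / len(results)

print(f"Baseline: {mention_rate('run-2026-01-05.json'):.0%} of prompts mentioned the brand")
print(f"Re-test:  {mention_rate('run-2026-01-19.json'):.0%} of prompts mentioned the brand")
```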

The difference was measurable.

For multiple developer-focused publishing queries, In Plain English began appearing in responses where it previously had not. In some cases, it was mentioned alongside established platforms. In others, it was positioned clearly as a place to publish developer tutorials.

There were even a few instances where the pages we had created were cited directly in the responses, even though In Plain English itself was not named. That was a good sign that the content was being used as a source, and a signal that brand attribution still needed further refinement.

This happened within two weeks of starting the process.

To be clear, this was not universal dominance across every possible prompt. AI visibility is probabilistic and contextual. However, for the specific high-value queries defined at the outset, the improvement was undeniable.


Why This Approach Worked

There are four reasons this approach succeeded.

First, we measured actual AI output instead of relying on assumptions. Most companies optimize for search engines while ignoring how LLMs synthesize information.

Second, we identified precise structural gaps. We did not tell the client to “create more content.” We showed them exactly where alignment was missing.

Third, the content we produced was grounded in context. It reflected existing site architecture, positioning goals, and real prompt phrasing.

Fourth, we closed the loop quickly. Testing, improving, publishing, and re-testing happened in rapid succession.


The Broader Implication

AI-driven discovery is no longer hypothetical. When users look for developer platforms, SaaS tools, or educational resources, they increasingly start inside ChatGPT. If your brand is absent from those answers, you are invisible at the point of intent.

Traditional SEO still matters. However, it does not automatically translate to AI visibility.

AI systems reward:

  • Clear positioning
  • Explicit use case coverage
  • Consistent language alignment
  • Structured contextual signals

If your messaging is ambiguous, you reduce the likelihood of being surfaced confidently in generated responses.

What we demonstrated with In Plain English is that AI visibility is not random. It is measurable, diagnosable, and optimizable.


Final Thoughts

In two weeks, a developer publication moved from inconsistent AI presence to being surfaced for meaningful publishing-related prompts in ChatGPT.

The process was not about gaming algorithms. It was about clarity, alignment, and feedback loops.

At Obsurfable, our goal is simple. We help companies understand how large language models interpret their websites, where the gaps are, and how to close them systematically.

AI discovery is becoming a primary distribution channel. The companies that treat it as measurable infrastructure rather than luck will be the ones that compound visibility over time.