How
to shape your career path for 2026, with decision trees for designers
and a UX skills self-assessment matrix. The only limits for tomorrow are
the doubts we have today.
As the new year begins, I often find myself in a strange place — reflecting on the previous year and looking forward to the year ahead. And as I speak with colleagues and friends at this time of year, it typically doesn’t take long for a conversation about career trajectory to emerge.
So I thought I’d share a few thoughts on how to shape your career path as we are looking ahead to 2026. Hopefully you’ll find it useful.
Run A Retrospective For Last Year
To be honest, for many years, I was mostly reacting. Life was happening to me, rather than me shaping the life that I was living. I was making progress reactively
and I was looking out for all kinds of opportunities. It was easy and
quite straightforward — I was floating and jumping between projects and
calls and making things work as I was going along.
Diverse career paths for UX Designers,
a helpful overview by Lili Yue. You might find yourself doing a little
bit of everything in this overview — but you need to know where you want
to go next.
Years ago, my wonderful wife introduced one little annual ritual which changed that dynamic entirely. By the end of each year, we sit with nothing but paper and pencil and run a thorough retrospective of the past year — successes, mistakes, good moments, bad moments, things we loved, and things we wanted to change.
We
look back at our memories, projects, and events that stood out that
year. And then we take notes for where we stand in terms of personal
growth, professional work, and social connections — and how we want to grow.
These are the questions I try to answer during that session:
What did I find most rewarding and fulfilling last year?
What fears and concerns slowed me down the most?
What could I leave behind, give away or simplify?
What tasks would be good to delegate or automate?
What are my 3 priorities to grow this upcoming year?
What times do I block in my calendar for my priorities?
It probably sounds quite cliché, but these 4–5 hours every year lay the foundation for the changes we want to introduce in the year ahead. This little exercise shapes the trajectory that I’ll be designing and prioritizing for next year. I can’t recommend it enough.
Another little tool that I’ve found helpful for professional growth is the UX Skills Self-Assessment Matrix (Figma template) by Maigen Thomas. It’s a neat little tool designed to help you understand what you’d like to do more of, what you’d prefer to do less of, and where your current learning curve lies vs. where you feel confident in your expertise.
A neat little tool to identify where you stand, what you want to do less of, more of, and what you’d like to learn.
The exercise typically takes around 20–30 minutes, and it helps identify the UX skills in your sweet spot — typically the upper half of the canvas. You’ll also pinpoint areas where you’re improving, and those you’re already pretty good at. It’s a neat reality check — and a great reminder once you review it year after year. Highly recommended!
UX Career Levels For Design Systems Teams
A while back, Javier Cuello put together Career Levels For Design System Teams (Figma Kit), a
neat little helper for product designers looking to transition into
design systems teams or managers building a career matrix for them. The
model maps progression levels (Junior, Semi-Senior, Senior, and Staff)
to key development areas, with skills and responsibilities required at
each stage.
Career Levels For Design System Teams (Figma Kit). Kindly put together by Javier Cuello.
What
I find quite valuable in Javier’s model is the mapping of strategy and
impact, along with systematic thinking and governance. While as
designers we often excel at tactical design — from elegant UI components
to file organization in Figma — we often lag a little bit behind in
strategic decisions.
To a large extent, the difference between
levels of seniority is moving from tactical initiatives to strategic
decisions. It’s proactively looking for organizational challenges that a
system can help with. It’s finding and inviting key people early. It’s
also about embedding yourself in other teams when needed.
But it’s also keeping an eye out for situations when design systems fail, and paving the way to make such failures less likely. And: adapting the
workflow around the design system to ship on a tough deadline when
needed, but with a viable plan of action on how and when to pay back
accumulating UX debt.
Find Your Product Design Career Path
When we speak about career trajectory, it’s almost always assumed that career progression inevitably leads to management. However, that hasn’t been the path I preferred, and it isn’t the ideal path for everyone.
Personally, I prefer to work on intricate fine details of UX flows and deep dive into complex UX challenges.
However, eventually it might feel like you’ve stopped growing — perhaps
you’ve hit a ceiling in your organization, or you have little room for
exploration and learning. So where do you go from there?
The Mirror Model (PDF) is a helpful way to visualize creative and managerial paths with equivalent influence and compensation.
A helpful model to think about your next steps is to consider Ryan Ford’s Mirror Model. It explores career paths and expectations that you might want to consider to advocate for a position or influence that you wish to achieve next.
That’s typically something you might want to study and decide on your own first,
and then bring it up for discussion. Usually, there are internal
opportunities out there. So before changing companies, you can switch
teams, or you could shape a more fulfilling role internally.
You just need to find it first. Which brings us to the next point.
Proactively Shaping Your Role
I keep reminding myself of Jason Mesut’s
observation that when we speak about career ladders, it assumes that we
can either go up, down, or fall off. But in reality, you can move up, move down, and move sideways.
As Jason says, “promoting just the vertical progression doesn’t feel
healthy, especially in such a diverse world of work, and diverse careers
ahead of us all.”
So, in the attempt to climb up, perhaps consider also moving sideways. Zoom out and explore
where your interests are. Focus on the much-needed intersection between
business needs and user needs. Between problem space and solution
space. Between strategic decisions and operations. Then zoom in. In the
end, you might not need to climb anything — but rather just find that
right spot that brings your expertise to light and makes the biggest
impact.
Sometimes these roles might involve acting as a “translator” between design and engineering, specializing in UX and accessibility. They could also involve automating design processes with AI, improving workflow efficiency, or focusing on internal search UX or legacy systems.
These roles are never advertised, but they have a tremendous impact
on a business. If you spot such a gap and proactively bring it to
senior management, you might be able to shape a role that brings your
strengths into the spotlight, rather than trying to fit into a
predefined position.
One notable skill worth sharpening is, of course, designing AI experiences.
The point isn’t about finding ways to replace design work with AI
automation. Today, it seems like people crave nothing more than actual
human experience — created by humans, with attention to humans’ needs
and intentions, designed and built and tested with humans, embedding
human values and working well for humans.
Design Patterns For AI Interfaces, a quick overview by Sharang Sharma.
If anything, we should be more obsessed with humans,
not with AI. If anything, AI amplifies the need for authenticity,
curation, critical thinking, and strategy. And that’s a skill that will
be very much needed in 2026. We need designers who can design beautiful
AI experiences (and frankly, I do have a whole course on that) — experiences people understand, value, use, and trust.
No technology can create clarity, structure, trust, and care
out of poor content, poor metadata, and poor value for end users. If we
understand the fundamentals of good design, and then design with humans
in mind, and consider humans’ needs and wants and struggles, we can
help users and businesses bridge that gap in a way AI never could. And
that’s what you and perhaps your renewed role could bring to the table.
Wrapping Up
The most important thing about all these little tools and activities is that they help you get more clarity. Clarity on where you currently stand and where you actually want to grow.
These are wonderful conversation starters to help you find a path you’d love to explore, on your own or with your manager. However, just one thing I’d love to emphasize:
Absolutely,
feel free to refine the role to amplify your strengths, rather than
finding a way to match a particular role perfectly.
Don’t forget: you bring incredible value to your team and to your company. Sometimes that value just needs to be guided to the right spot to bring it into the spotlight.
Artificial Intelligence is advancing faster than any workplace
technology we’ve seen before. Leaders across business, government, and
civil society are optimistic—and for good reason. The World Economic Forum estimates AI could contribute nearly 14% of global GDP by 2030, translating to a $15+ trillion economic opportunity.
AI adoption is accelerating, but enterprise-level productivity gains remain inconsistent and fragile.
Employees report saving time with AI tools, yet organizations and
economies are not seeing sustained performance improvements.
Productivity growth in major economies remains uneven, even as AI
becomes embedded in daily work.
This signals a hard truth: technology alone does not create productivity. Systems do.
The Real Gap Is Not AI Adoption — It’s Learning
Most organizations are still approaching AI transformation in the wrong sequence:
Deploy new technology
Add training later
This “technology-first, people-second” model no longer works.
True productivity emerges when humans and AI are designed to learn together, continuously, within real workflows. That means rethinking not just tools—but how work itself is structured.
Traditional upskilling models—offsite workshops, static courses, one-size-fits-all training—are too slow for an AI-driven world.
What’s needed is a learning system embedded directly into work, enabling people to adapt as roles evolve in real time.
To address this challenge, a practical framework emerges: DEEP.
The DEEP Framework: Embedding Learning into AI Transformation
1. Diagnose
Organizations must analyze work at the task level—not job
titles—to understand where AI can meaningfully augment human
performance. This requires collaboration between domain experts, early
AI adopters, and cross-functional “augmentation squads” focused on real
use cases.
2. Embed
Learning should happen in the flow of work. AI can deliver
just-in-time coaching, personalized feedback, and contextual guidance
while employees are doing their jobs. This also requires a culture that
rewards experimentation, knowledge sharing, and durable human skills
like creativity, judgment, and critical thinking.
3. Evaluate
Modern learning systems need real skills data. AI can help infer
capabilities from behavior and work outputs, enabling continuous
assessment, smarter recommendations, and personalized development
paths—without disrupting productivity.
4. Prioritize
Learning and Development must evolve from content delivery to capability architecture.
This means skill-based workforce planning, strong executive
sponsorship, and portable, verifiable skill records that follow
individuals across roles and careers.
Why This Matters Now
AI transformation is not a one-time rollout—it’s a continuous
cycle of change. Organizations that treat learning as a side initiative
will struggle to keep pace. Those that embed learning into everyday work
will move faster, adapt better, and scale productivity sustainably.
The real competitive advantage won’t come from who adopts AI first—but from who learns fastest.
The Path to an Augmented Future
To unlock AI’s $15 trillion promise, leaders must move beyond the
false choice between automation and human labor. The future belongs to
organizations that invest equally in AI capability and human learning, designing systems where both evolve together.
AI doesn’t replace people. People who learn with AI will outperform those who don’t.
Imagine a tutor who doesn’t just answer your questions—but anticipates them.
A tutor that recognizes when you’re struggling before you
ask for help. One that changes its teaching strategy mid-lesson, asks
the right Socratic questions instead of giving answers, and adapts
continuously to how you think, learn, and apply knowledge.
That is the promise of agentic AI: a tutor that is
Capable of planning and executing multi-step actions
Able to learn from interaction and feedback
In education, this marks a fundamental change.
Instead of waiting for students to ask questions, agentic AI tutors actively manage the learning journey—guiding, nudging, diagnosing, and adjusting in real time.
Compared with tools that simply answer questions, agentic AI tutors function more like a personal learning coach:
Proactive and anticipatory
Focused on long-term learning goals
Continuously adapting to learner behavior
Optimizing not just answers, but understanding
The difference isn’t incremental—it’s transformational.
Why Agentic AI Tutoring Matters Now
The limitations of one-size-fits-all education are well known. Agentic AI tutoring directly addresses them by enabling:
Hyper-Personalization at Scale
Every learner follows a unique path—pace, content, difficulty, and teaching method adapt in real time.
Proactive Intervention
Instead of discovering learning gaps after an exam, agentic tutors
detect early signals—hesitation, repeated errors, conceptual drift—and
intervene immediately.
By handling routine practice, diagnostics, and administrative
tasks, AI agents free teachers to focus on mentorship, creativity,
emotional intelligence, and higher-order instruction.
The Future of Education Is Collaborative: AI + Humans
Agentic AI is not here to replace teachers.
The future belongs to collaborative intelligence—where AI handles personalization and scale, while humans provide empathy, motivation, ethics, and meaning.
AI tools are
reshaping search habits, but Google’s dominance endures as the default
gateway for online information, new research shows.
Generative AI is reshaping how people find information — but it
hasn’t replaced search engines like Google. That’s according to a new
Nielsen Norman Group study:
While users increasingly experiment with ChatGPT, Gemini and AI
Overviews, most still default to old habits: starting with Google.
Why we care. Google is a habit – and habits are hard
to break. That gives Google a built-in edge: even as AI eats into
clicks, Google remains the default starting point for users. That means
organic visibility still matters for brands and businesses. AI is
reshaping the journey, but it won’t erase search overnight.
The big picture. According to the study:
AI overviews = fewer clicks. People notice and
often rely on Google’s AI summaries, reducing the need to visit
websites. Not new, and still bad news for publishers.
AI chat boosts efficiency. Once users tried Gemini or ChatGPT for complex tasks, they found them faster and more useful than traditional search.
Search isn’t gone. Even heavy AI users still
cross-check with Google or visit content pages. No participant relied
solely on AI for all information needs.
Familiarity wins. Just as “Google” became a verb,
some users now casually call ChatGPT “Chat.” Brand familiarity may be
the biggest advantage in AI search.
Bottom line. Generative AI is changing how people research – but it’s an evolution, not a revolution. The biggest barrier to AI adoption isn’t accuracy or UX, it’s human habit.
About the data. Nielsen Norman Group conducted
remote usability testing with nine participants in North America and the UK,
representing diverse demographics and levels of AI experience. Sessions
explored how users approached real research tasks with search engines
and AI tools.
From comparison pages
to use case hubs, see which content types are driving AI search
visibility and how to optimize them for LLM discovery.
AI search is evolving fast, but early patterns are emerging.
In our B2B client work, we’ve seen specific types of content consistently surface in LLM-driven results.
These formats – when structured the right way – tend to get picked up, cited, and amplified by models like ChatGPT and Gemini.
This article breaks down five content types gaining notable AI search visibility, what makes them effective, and how to optimize them for LLM discovery:
1. Comparison pages
Our analysis shows that Gemini frequently surfaces “X vs. Y” content in AI Overviews and AI Mode – even when the query doesn’t ask explicitly for the comparison.
What to include
Publish /vs/ pages with pros, cons, pricing, use case match, and schema.
Do this for any competitors that bring in a decent volume of
comparison queries, along with any comparisons that are easily related
to your product or service.
2. Integration docs/open APIs
Our analysis has provided numerous instances of GPTs and Copilot citing SaaS APIs and dev docs in answers.
Example
A ChatGPT prompt for “setting up span metrics for backend services”
cited a docs page from performance monitoring company Sentry in a list
of best practices.
What to include
Maintain clear documentation + changelogs with versioning and schema.
3. Use case hubs
We’ve seen clear indicators that AI Search prefers content that ties features to real business problems.
What to include
Build intent-driven use case pages with testimonials and product mapping.
4. Thought leadership on external platforms
LLMs pick up posts from company experts, including founders, SMEs,
and established thought leaders, on outlets like Medium and Dev.to for
strategy-based questions.
What to include
Syndicate posts from a company founder, SME, or brand ambassador
with a unique POV, then include a canonical link back to the business
website.
5. Product docs with schema
Gemini AI Mode lifts from product docs if they’re structured with FAQs, How-to sections, and/or breadcrumb structured data.
What to include
Add FAQPage, HowTo, breadcrumb structured data, and SoftwareApplication schema types to product docs.
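To make that concrete, here is a minimal sketch of what an FAQPage block on a product-docs page might look like. The product name and Q&A text are hypothetical placeholders, not taken from any real documentation.

```html
<!-- Minimal sketch: FAQPage structured data embedded in a product-docs page.
     The product name and Q&A text are hypothetical placeholders. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "How do I enable span metrics in ExampleApp?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "Open Settings, choose Monitoring, and toggle 'Span metrics'. The change applies to new traces only."
    }
  }]
}
</script>
```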
3 overarching recommendations
You should never veer from the E-E-A-T principles that have long underpinned traditional SEO. Those same tenets will serve you well for LLM discovery, too.
Beyond them, however, there are a few LLM-specific steps to consider if your goal is to increase AI search visibility.
I’ll break down three key recommendations.
Optimize for multi-modal support
AI search systems are increasingly retrieving and synthesizing
multimodal content (think: images, charts, tables, videos) to better
answer user queries.
Flex your content across multiple media types to provide more useful, scannable, and engaging answers for users.
Specific recommendations:
Ensure images and videos remain crawlable for search and AI bots.
Serve images via clean HTML and avoid lazy-loading with
JavaScript-only rendering, since LLM-based scrapers may not render
JavaScript-heavy elements.
Images should use descriptive alt text that includes topic context.
Add captions to images and videos with an explanation right below or beside the visual.
Use <figure>, <table>, etc., with contextually correct markup to help parse tables, figures, and lists.
Avoid images of tables. Use HTML tables instead for a machine-readable format supporting tokenization and summarization.
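As a rough illustration of the recommendations above, here is how a chart and its underlying data might be marked up; the file name, caption, and numbers are invented for the example.

```html
<!-- Illustrative sketch: a captioned image plus a machine-readable HTML table,
     rather than a screenshot of the same data. All values are placeholders. -->
<figure>
  <img src="/img/signup-conversion-2025.png"
       alt="Line chart of monthly sign-up conversion rates, January to June 2025">
  <figcaption>Sign-up conversion rate by month, January to June 2025.</figcaption>
</figure>

<table>
  <caption>Sign-up conversion rate by month (2025)</caption>
  <thead>
    <tr><th scope="col">Month</th><th scope="col">Conversion rate</th></tr>
  </thead>
  <tbody>
    <tr><td>January</td><td>3.1%</td></tr>
    <tr><td>February</td><td>3.4%</td></tr>
  </tbody>
</table>
```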
Optimize for chunk-level retrieval
AI search engines don’t index or retrieve whole pages.
They break content into passages or “chunks” and retrieve the most relevant segments for synthesis.
Optimize each section like a standalone snippet.
Specific recommendations:
Don’t rely on needing the whole page for context. Each chunk should be independently understandable.
Keep passages semantically tight and self-contained.
Focus on one idea per section: keep each passage tightly focused on a single concept.
Use structured, accessible, and well-formatted HTML with clear subheadings (H2/H3) for every subtopic.
Optimize for multi-source synthesis
AI search engines synthesize multiple chunks from different sources into a coherent response.
Aim to make your content easy to extract and logically structured to fit into a multi-source answer.
Specific recommendations:
Summarize complex ideas clearly, then expand (e.g., a clearly structured “Summary” or “Key takeaways” section).
Start answers with a direct, concise sentence.
Favor a factual, non-promotional tone.
Use structured data to help AI models better classify and extract structured answers.
Use natural language Q&A format.
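To illustrate, here is a minimal sketch of a chunk-friendly section for a hypothetical help-center page: a clear H2, a direct first sentence, and a short "key takeaways" list that can stand on its own.

```html
<!-- Sketch of a self-contained, chunk-friendly passage. The page, product,
     and figures are hypothetical. -->
<h2>How long does a data export take?</h2>
<p>Most exports finish within 10 minutes; exports larger than 1 GB can take up to an hour.</p>
<h3>Key takeaways</h3>
<ul>
  <li>Exports run in the background and email you a download link when done.</li>
  <li>Only workspace admins can start an export.</li>
</ul>
```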
Create B2B content that wins in AI search
An added benefit of these five content types is that they span
multiple intent stages – helping you attract prospects and guide them
through the funnel.
Just as important: make sure your AI search measurement systems are
in place (we use Profound, GA, and qualitative research) so you can
track impact over time.
And stay tuned to reports and industry updates to keep pace with new developments.
Boost your chances of
being cited in AI answers with these four technical SEO tactics that
power visibility in generative search.
When it comes to AI-powered search, visibility isn’t just about ranking – it’s about being included in the answer itself.
That’s why generative engine optimization (GEO)
matters. The same technical SEO practices that help search engines
crawl, index, evaluate, and rank your content also improve your chances
of being pulled into AI-generated responses.
The good news? If your technical SEO is already strong, you’re
halfway there. The rest comes down to knowing which optimizations do
double duty: improving your rankings while boosting your visibility in
generative results.
This article breaks down four technical pillars with the biggest impact on GEO success:
1. Schema
Schema has long been essential for SEO because it removes ambiguity.
Search engines use it to understand content type, identify entities, and
trigger rich results.
For GEO, schema clarity is even more important. LLMs favor structured
data because it reduces ambiguity and speeds extraction. If your
content is marked up clearly, it’s more likely to be selected and cited.
Priority schema types for GEO
Focus on evergreen types that improve visibility:
FAQPage: Clearly labeled Q&A helps LLMs match user queries and surface your answers.
HowTo: Structured step-by-step processes are easy for AI to extract.
Product / Service: Defines pricing, availability, and specifications for accurate inclusion.
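For instance, a step-by-step article could carry HowTo markup along these lines; the task and steps are hypothetical placeholders rather than a recommended recipe.

```html
<!-- Minimal HowTo sketch; product name and step text are hypothetical. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "HowTo",
  "name": "How to connect ExampleApp to your data warehouse",
  "step": [
    {
      "@type": "HowToStep",
      "name": "Create an API key",
      "text": "In Settings, open the API tab and generate a read-only key."
    },
    {
      "@type": "HowToStep",
      "name": "Add the connection",
      "text": "Paste the key into Integrations, select Warehouse, and click Connect."
    }
  ]
}
</script>
```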
2. Performance
Generative engines pull from billions of pages. If yours is slow or
unstable, they can skip it in favor of faster, more reliable sources.
Quick performance wins
Compress images; use WebP or AVIF; enable lazy loading.
Eliminate render-blocking CSS and JavaScript.
Target a server response time (TTFB) under 200ms.
Use a CDN to reduce latency.
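For the image-related items above, a minimal sketch (file names and dimensions are placeholders) could look like this, serving modern formats with native lazy loading rather than JavaScript-based techniques:

```html
<!-- Sketch: AVIF/WebP with a JPEG fallback and native lazy loading for a
     below-the-fold image. File names and dimensions are placeholders. -->
<picture>
  <source srcset="/img/feature-chart.avif" type="image/avif">
  <source srcset="/img/feature-chart.webp" type="image/webp">
  <img src="/img/feature-chart.jpg"
       alt="Chart comparing page load times before and after optimization"
       width="1200" height="675" loading="lazy">
</picture>
```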
Bottom line: Speed could be a tiebreaker between
equally relevant sources. Faster pages have higher odds of inclusion in
AI-generated answers – and they convert better once users click through.
3. Content structure: Making information machine-readable
LLMs rely on clarity. The easier it is for machines to parse and
organize your content, the more likely it is to appear in AI-generated
results.
JavaScript rendering: Don’t hide core content behind heavy client-side rendering. Use server-side rendering for anything essential.
4. Infrastructure
Bottom line: If search or generative engines can’t
crawl, verify freshness, or trust your site, your content won’t be
considered – no matter how authoritative it is.
Building for search and AI success
The technical elements that drive GEO success aren’t new. They build on SEO fundamentals you already know:
Schema.
Performance.
Structure.
Infrastructure.
But in the AI era, these aren’t just best practices – they’re the deciding factors between being featured and being forgotten.
Getting this right will preserve your search visibility and put your content at the center of AI-driven answers.
Discover how Google's
Knowledge Graph works, why it matters for SEO, and how to optimize your
content and entities for enhanced search visibility and authority.
Google’s Knowledge Graph is a powerful database of real-world
relationships between keywords and the things—people, objects, concepts,
etc.—that those search terms represent.
Google’s Knowledge Graph is essentially built on the principles of
ontology. An ontology provides the formal framework for defining
entities, their attributes, and the relationships between them, ensuring
that knowledge is structured in a consistent and machine-readable way.
The Knowledge Graph applies this framework at massive scale, linking
billions of entities—like people, places, and organizations—through
clearly defined relationships.
The Knowledge Graph influences search results in increasingly dynamic
ways. As its influence continues to grow, it’s important to understand:
Use cases for how Knowledge Graph data shows up in the SERPs
Where Google gets the data to power its Knowledge Graph
The best ways to prepare your content so it appears in Knowledge Graph–powered search features
Keep reading to learn about the Google Knowledge Graph, and get
practical steps on how to make sure your brand and content are
represented in this critical component of Google’s search algorithm.
Why the Knowledge Graph is central to modern SEO
Launched in 2012, the Knowledge Graph introduced a radical new approach to delivering search results.
Early search engines, including Google, worked by matching keywords
to website text, then sharing a list of matched websites in their search
engine results pages (SERPs).
With the introduction of the Knowledge Graph, Google algorithms are
better able to identify the people, things, and ideas that searchers
really want to find—rather than simply matching search queries to
content.
Or as Google has put it, the Knowledge Graph helps searchers find “things, not strings.”
Over time, many of Google’s algorithm updates and expanded SERP features have relied on the Knowledge Graph.
Some of those algorithm updates and SERP features include:
Semantic search: Google’s ability to understand the meaning and context of searchers’ queries
Knowledge panels: SERP features that provide short-form facts and information about the searched topic
Brand visibility and discovery: How brands show up in searches, including how they relate to sub-brands, related brands, industries, products, and more
As Google continues to construct more complex and context-relevant search results, the ability of the Knowledge Graph to match searches to the things they represent becomes more central to SEO.
Do LLMs use Google’s Knowledge Graph?
Large language models (LLMs) like ChatGPT do not have direct access
to Google’s proprietary Knowledge Graph during training. Instead, they
are trained on large amounts of text data, which allows them to learn
patterns of language and meaning.
Researchers often combine the two through approaches like
retrieval-augmented generation (RAG), where a model queries a knowledge
graph or database at inference time to reduce hallucinations. In other
cases, publicly available graphs like Wikidata can be used to fine-tune
models or inject structured knowledge. With all that in mind, here’s a
closer look at what the Knowledge Graph is and how it works.
The Google Knowledge Graph is a database of structured data that
describes the relationships between different people, places, things,
and concepts—collectively known as “entities.”
With the Knowledge Graph, Google has been able to transform its
search algorithm from a traditional keyword-match approach to a more
logical, entity-based system.
It does this by understanding not only the definitions of words and
phrases in its index, but by leveraging its machine learning algorithms
to map how those entities and their relationships connect to each other.
For example, a user who searches for “seal” might want to know about any of the following things:
An identifying emblem—or the object used to create such an emblem
A member of an elite US Navy unit
A device or material that prevents leaks
The world-famous British recording artist
A semi-aquatic mammal
With the traditional keyword-based system, Google might have shown mixed search listings for pages related to all of the topics that “seal” could represent.
Using the Knowledge Graph, however, Google better understands what entity searchers want to know more about.
By matching searches to entities, Google understands that most people
who use the keyword “seal” are looking for the musical artist, rather
than the other entities the term “seal” might represent.
This connection becomes more evident when using minor variations in keyword query language.
For example, when searching for “seals,” people are more likely
looking for information about semi-aquatic mammals than other things
that “seals” could mean.
The Google Knowledge Graph can make these connections because it
understands the semantic relationships between words and the real-world
entities they represent.
To understand how it does that, let’s dive into an overview of how knowledge graphs work in general.
A quick and dirty knowledge graph primer
Generally speaking, a knowledge graph is a way to represent paired data.
For the purposes of this guide, the paired data being represented is a semantic relationship between two entities.
A semantic relationship refers to how words connect to their meanings. There are three basic parts to such a relationship:
Subject (noun): The first entity
Predicate (verb): The relationship between the entities
Object (noun): The second entity
Semantic relationships are often stated in simple sentences, such as:
A seal has fur.
The semantic relationship here can be broken down as follows:
Subject: a seal
Predicate: has
Object: fur
This relationship is often visualized using circles for the entities
(“seal” and “fur”) with an arrow showing the predicate relationship
(“has”).
This type of data model is known as a graph—more specifically a directed graph (or digraph) because it shows the direction of the relationship.
Adding more entities (circles) and relationships (arrows) to the graph shows how different entities interact.
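For readers who want to see how such a triple becomes machine-readable, here is one possible representation in JSON-LD, the same syntax used for Schema.org markup. The vocabulary is made up purely for illustration and is not how Google stores its Knowledge Graph.

```html
<!-- Illustration only: the "a seal has fur" triple expressed in JSON-LD,
     using a made-up example vocabulary rather than Schema.org types. -->
<script type="application/ld+json">
{
  "@context": { "ex": "https://example.org/vocab/" },
  "@id": "ex:Seal",
  "ex:has": { "@id": "ex:Fur" }
}
</script>
```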
Knowledge graphs are used to model data across many different fields,
including linguistics, computing, various sciences, and mathematics. In
fact, the general concept of a knowledge graph has been around for a long time.
With that in mind, let’s look at how Google’s Knowledge Graph impacts SEO.
For every search the Google algorithm processes, one of the big decisions it has to make is which SERP features to display.
Part of how it chooses those features is by using its Knowledge Graph
to identify what entity (or entities) the user is searching for, as
well as what other entities are closely related to it.
For example, in the knowledge panel for the query “seals,” Google provides a number of facts about seals.
The ever-growing complexity and refinement of entities and
relationships in the Knowledge Graph has allowed Google to provide more
granular and better organized information in its search features.
Here are some examples for the most common search features that
illustrate how Google leverages this entity-based information in its
results.
Knowledge panels
Knowledge panels contain brief factual information about the main
subject (entity) of any given search. They typically appear at the top
of mobile search results or on the right of search listings on desktop
searches.
The knowledge panel will look similar for many types of searches, though the specific types of factual information may change.
For example, the knowledge panel for recording artist Seal includes:
How most people know him (“Singer-songwriter and record producer”)
A short informational paragraph from Wikipedia
Biographical details
Relationships to other notable people and groups
Compare that to the knowledge panel for the US Navy SEALs, which
shows facts related to the force’s founding, organizational structure,
and associated names.
The information included in knowledge panels can come from a few general sources:
Open data from public sources
Private data licensed by Google
Websites Google has crawled that demonstrate a high level of experience, expertise, authoritativeness, and trust (E-E-A-T)
Directly from the individual, company, or organization—if the panel is claimed by the subject of the knowledge panel
More on how Google sources this data is provided in the section on How Google builds the Knowledge Graph below.
People also searched for (PASF)
Google likes to give users options to refine searches or dig deeper
into more specific topics. One of the ways it does this is through a
“People also searched for” feature, sometimes shortened to “PASF.”
In fact, many searches may include two sections that are labeled “People also search for”:
Below the knowledge panel (as shown above)
At the bottom of the SERP
The PASF feature below the knowledge panel includes entities that are closely related to the main entity of the search.
For example, when the main search is “navy seals,” the PASF feature includes related military groups.
However, the PASF feature at the bottom of the page tends to be more
keyword focused. This PASF is designed to help refine the current search
rather than search for different or related entities.
Finally, it’s important to distinguish between “People also search for” (PASF) and “People Also Ask” (PAA):
PASF helps searchers find the right entity or dig deeper into related entities.
PAA lets searchers learn more about specific topics
related to the current search entity, such as factual information,
how-tos, or similar details.
In other words, while both features are likely informed by the
Knowledge Graph, PASF has a much more direct link to the entity-based
relationships contained in Google’s knowledge base.
Dig deeper: How to maximize visibility in Google’s blended SERPs
Related entities
In addition to some of the SERP features discussed above, Google may
use the Knowledge Graph to provide context-specific features with
information about related entities.
For example, when searching for the musician Seal, a list of related songs appears near the top of the search results:
There are many different types of entities in the Knowledge Graph. A few such entity types listed in Google’s technical documentation include:
Book
Event
Movie
Music Recording
Organization
Periodical
Person
Place
Sports Team
TV Episode
Video Game
More complex versions of some entities are also included, such as
Book Series (a sequence of Book entities) or Music Album (a collection
of Music Recordings).
Likewise, more specific versions of some entities are included, such
as Local Business and Government Organization—each of which is a type of
Organization.
Not every type of entity listed here will necessarily show up in the
SERPs. The ones that do will be based on the specific context of the
search.
For example, musical artists that are on tour may have a calendar of
upcoming concerts listed within the search results like the example
below.
Featured snippets and AI Overviews
Both featured snippets and AI Overviews provide searchers with the information they’re most likely to want for the current search.
However, while they serve a similar purpose, these two SERP features function a little differently:
Featured snippets quote text from a single source that answers or gives information about the query.
AI Overviews generate a summary of information
pulled from multiple sources. They also tend to be longer and cover the
topic more broadly than featured snippets.
AI Overviews and featured snippets typically appear at or near the
top of search results, and they are larger than standard search
listings.
For example, an AI Overview appears at the top of the search “what do
seals eat” with information about seal diets and links to multiple
sources.
The text for AI Overviews is generated from Google Gemini’s
understanding of how the entities are related—in this case, which
entity is the eater (seals) and which entities are being eaten (fish,
squid, etc.).
Both featured snippets and AI Overviews can also appear in People Also Ask (PAA) and Things to Know.
In the PAA for the same query, the first question provides a featured snippet from a page about seals at the International Fund for Animal Welfare (IFAW).
At the top of the answer, Google identifies the link between the
entities of “seal” and “krill” as understood by the Knowledge Graph.
Compare the highlighted text in the search result above and the
screenshot of the source page below. The snippet is pulled verbatim from
the website, rather than being a generative text overview.
Whether featured snippets remain as a Google SERP staple or go away altogether, both they and their newer AI counterparts will continue to make use of Knowledge Graph in some form.
The Shopping Graph
When it comes to product information, Google doesn’t use the
Knowledge Graph. Instead, it utilizes a very similar tool known as the Shopping Graph.
That’s because Google uses data submitted through its Merchant Center
and Manufacturer Center to populate the Shopping Graph. This type of
data is subject to change more frequently, and allowing merchants and
manufacturers to update this information means it’s more likely to stay
fresh and accurate.
The Shopping Graph powers transactional search results in much the
same way that the Knowledge Graph powers searches with informational and
other search intents.
For example, when searching for a “wax seal,” Google will display currently available products, including stamps and wax.
Although they are distinct databases, understanding the entity-based
approach for both Knowledge Graph and Shopping Graph can help you
identify how Google enhances its search results using information in
these databases.
How Google builds the Knowledge Graph
Here are some of the ways Google collects and extracts information to power its knowledge base.
Open data and community projects
One of the most substantial ways that Google gathers Knowledge Graph
information is through the collection of data from various open-data
sources and community projects like the publicly edited online
encyclopedia Wikipedia and the related community project, Wikidata.
Wikipedia is the source for many of Google’s features, especially those that pull summary text snippets.
Wikidata, which provides the structured-data knowledge base that supports Wikipedia, Wiktionary, and other Wikimedia projects, is also a frequent source for the Google Knowledge Graph.
Wikidata is rarely mentioned as a primary source in results, but you can see how some knowledge panel facts are pulled from the Wikidata page for an entity, such as in the Navy SEALs example below.
Before Wikidata, Google had used Freebase—its own community-edited
database—as a Knowledge Graph source. Freebase was shut down in 2016,
and much of its data was transferred to Wikidata.
Government data
Google sources government data to provide specific results.
One such example is The World Factbook published annually by the US Central Intelligence Agency (CIA). The Factbook
provides information about countries, geographical areas,
organizations, and political alliances around the globe. It pulls its
information from various other US agencies, as well as private sources.
The information in the Factbook is considered public domain.
This means facts and images from it may also be included in Google’s
knowledge graph from other sources, such as this image of a seal pup.
Another example is Google’s structured public data project, Data Commons. This project collects publicly available data from public sources around the world, many of which are governmental agencies or multi-governmental organizations like the United Nations and the European Union.
Although Data Commons has its own separate knowledge graph, Google has acknowledged that some Data Commons data points are used in Google Search as well.
Certain searches can also pull information from government sources related to that topic.
For example, Google pulls weather and air quality data
from various national and international meteorological agencies. This
data is combined and used to predict weather in Google’s “nowcast”
feature.
Private licensed data
Google also licenses data from various private sources to use in its search results.
For example, Google works directly with sports leagues and teams, as well as stats aggregators, to get real-time and historical statistics.
For example, Stats Perform says that Google uses its Opta platform to provide real-time sports scores and statistics.
Because Google does not disclose much detail about private data
sources, it’s hard to know exactly what information Google pulls from
those sources versus its own Knowledge Graph.
Even so, this private data is definitely linked to entities within
the Knowledge Graph, and it offers opportunities to find out more about
both them and related entities.
Google Books
While Google Books is a separate project from Google Search, the two are closely aligned. Some Google Books data is included in the Knowledge
Graph, and snippets and data from books are used directly in SERP
features like knowledge panels.
The same exact highlighted information in the above knowledge panel
can be seen in the Google Books page for the book, as shown below.
Structured data from websites
Google recommends using structured data to help its indexing bots understand the layout and content of webpages.
Structured data uses special markup to describe the paired
relationships modeled by knowledge graphs. In particular, Google and
other search engines use the structured data framework from Schema.org to better understand semantic information on the web.
Webpages with properly marked up schema can have snippets and other
information more readily appear in search features like People Also Ask.
The information outlined in the AI Overview above comes from the
source which uses schema markup to clarify the relationship to the
searched entity.
More information is provided below on how to optimize pages for the Knowledge Graph using schema and other methods.
User feedback and claimed knowledge panels
Users can also influence the information contained in the Knowledge Graph by providing feedback.
When viewing a knowledge panel, you may see the “Feedback” link at
the bottom right corner, as highlighted in the following screenshot.
Clicking the link will highlight the various facts and related
entities included in the SERPs. Each of these has a flag next to it,
allowing the user to provide feedback about that specific item.
After selecting one of these flags, the user can classify their
feedback and provide details with supporting evidence, statements, and
links.
Users who search for themselves or for someone they represent may be
able to claim the knowledge panel after following a verification
process. This can provide a greater ability to update (or remove) items
and information from appearing in search results.
Unstructured data
Despite all of the efforts out there to find better ways to create
and understand structured data, there’s still a large portion of the web
that lacks structure.
Unstructured data is information that might be relatively easy for a human to understand, but which is difficult for computers to interpret.
For example, the following paragraph is easily understood by people:
Seals spend a lot of time swimming, barking at each other, and
hunting fish and other seafood. They have fins instead of feet, and
their fur helps them stay warm.
In order for a machine to understand the above paragraph, it has to
break down each piece of information in that paragraph in a way it can
understand.
Such a breakdown might look something like this:
Entity (subject): Seals:
Action: Spend time:
Sub-action: Swimming
Sub-action: Barking:
Object: At each other
Sub-action: Hunting:
Entity (object): Fish
Object (object): Other seafood
Property: Have:
Entity (object): Fins:
Clarification: Instead of feet
Entity (object): Fur:
Purpose: Stay warm
Figuring out how to translate human language to
machine-understandable models has been problematic since—well, since
even before search engines like Google existed.
With natural language processing (NLP), Google has gotten increasingly better in recent years at moving beyond simplistic, keyword-based matching to understanding both queries and indexed pages in more semantically meaningful ways. Some of the systems Google uses in this area include:
Bidirectional Encoder Representations from Transformers (BERT): Helps understand how different word combinations can change the meaning of a keyword or sentence
Multitask Unified Model (MUM): Improves featured snippet callouts
Passage ranking: Identifies the meaning and intent of passages within a page of context to provide more relevant results
RankBrain: Associates words with specific concepts
Google applies these NLP algorithms as part of its search ranking systems, not
its indexing systems. That means it’s analyzing the language of both
the submitted query and indexed pages to find useful content for users.
Actual use of unstructured data in the Google Knowledge Graph appears to be limited.
Rather, Google extracts information from the Knowledge Graph and its
indexed webpages based on its NLP understanding of keyword intent.
However, as Google continues to research
topics like NLP and its processing of unstructured data gets better and
better, it’s more likely that data pulled from unstructured content
will find its way into its Knowledge Graph.
Why SEOs should care about the Knowledge Graph
As the Knowledge Graph has grown in size, it has also grown in
importance. Over time, new and updated features have leaned further into
entity-based information meant to keep people on Google rather than
directing users to other web pages.
Older-style search listings are still a big part of the SERPs.
However, for some searches, the top listings are getting pushed further
and further down the page in favor of Knowledge Graph–powered features
like knowledge panels, AI Overviews, and “Things to Know” sections.
Being informed about how the Knowledge Graph can impact the SERPs can
help SEOs better understand how to counteract—or even take advantage
of—the growing prominence of Knowledge Graph data.
Brand visibility
There are several levels of brand visibility to consider in search results:
Branded searches: Queries that use the brand name or a commonly used brand nickname (such as “McD” for “McDonald’s”)
Related entity searches: Queries with closely related people, products, or phrases (such as trademarks and mottoes)
Related topic searches: Queries about things and
ideas that affect the brand’s industry or area of influence (such as
health impacts, social concerns, or corporate governance)
Because Google understands the entities behind these searches, it can provide results that connect these entities together.
For example, imagine we are working with the musical artist Seal:
“Seal” is a branded search term for the artist,
because Google understands the main entity behind that search to be the
artist. Other branded terms might include keywords like “Seal songs” or
“Seal albums.”
“Batman Forever” is a related entity search in reference to “Seal,” because Google understands the relationship between the Batman Forever soundtrack and Seal’s song on that soundtrack, “Kiss from a Rose.”
“Songs about flowers” is a related topic search because Google understands Seal’s song “Kiss from a Rose” to be a song about flowers.
Understanding how Google links entities in the Knowledge Graph can
help to ensure that the entities and topics you want are connected to
your brand. It can also uncover connections between entities you don’t want associated with your brand.
E-E-A-T: Experience, Expertise, Authoritativeness, and Trust
Google has long said that it gives preferential ranking to sites that
follow E-E-A-T principles. This is also true of information that it
includes into the Knowledge Graph.
In fact, some of Google’s updates to the Knowledge Graph appear to focus on including more entities with high E-E-A-T.
This makes sense considering that Google is always looking for more and
better ways to validate the usefulness and accuracy of the information
it indexes across the web.
If Google is pulling data from your website into SERP features
powered by the Knowledge Graph, it can be an indication that your site
is doing well when it comes to signaling E-E-A-T.
However, if Google isn’t pulling data from your site, it could be a problem with E-E-A-T.
Here are some ways you can try to improve E-E-A-T:
Experience: Demonstrate your experience, as well as
the experience of others who publish on your site, to show that you can
back up your content with knowledge and skill.
Expertise: Highlight your expertise by building content that’s relevant to your industry and closely connected topics.
Authoritativeness: Display testimonials from others
in your field, and showcase any awards or accolades you have that
indicate how others perceive you.
Trustworthiness: This builds on the other three
elements in this list, and as you improve your experience, expertise,
and authoritativeness, your site will likely be seen as more
trustworthy.
Attempting to improve E-E-A-T is always a good choice because it will improve your chances of getting into the Knowledge Graph and ranking higher in search results.
As algorithms get more complex and AI becomes more prevalent, more dynamic types of search are emerging:
Voice search has been available for a while, and it has led to the use of longer-tail, sentence-length search queries.
Multimodal search is a burgeoning field that mixes voice, image, and audiovisual into a whole new search experience.
With better AI recognition and Natural Language Processing, Google
and others are getting better at identifying the entities and
relationships in these types of non-text-based searches. It’s becoming
less about the specific wording a user utilizes, and more about their
intent for the search.
As Google leans into AI-driven multimodal search
with products like Google Lens, the need to identify and understand the
entities and relationships behind these complex searches will
increase.
Although organic listings are pushed further down the page, AI
Overviews tend to have more sources for the information in their
generated summaries. These summaries link to prominent pages containing
relevant information.
Because there are more sources in AI Overviews, this gives content
publishers more opportunities for their pages to appear higher up on the
SERPs page—even if they weren’t in the first organic position.
This means that as the Knowledge Graph and AI continue to evolve, it
may be better to have content that provides specific information and
facts about entities rather than content that ranks at the top of the
organic search results.
AI and future search environments
One of the ways that Google is enhancing searches with AI is through what it calls “query fan-out.”
Essentially, this means sending out a bunch of related sub-queries
that can be pulled into a single, relevant AI-generated response.
As AI continues to gain greater prominence, finding ways to have your
site appear as a source in AI-generated information is going to become
more important.
One of the ways to do that is by optimizing for the Knowledge Graph. Let’s take a look at how to do that.
1. Add and test relevant schema
It’s important to mark up your website with the appropriate schema so
that Google can clearly and easily understand the information on your
site. (See “Structured data from websites” above for more information.)
Google Search Central dedicates a section of its documentation to helping webmasters and SEO professionals understand how structured data works in Google Search. Familiarize yourself with it, and keep it handy for reference when implementing schema on your site.
Following are some of the most important schema types to implement.
Organization:
This schema provides details about the main organization behind the
website, including logo, description, and contact information that can
appear in SERP features. Google recommends adding this only to the site’s home page or a page describing the organization, such as an “About Us” page.
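As a point of reference, a minimal Organization block for a home page might look like the sketch below; the company details are placeholders.

```html
<!-- Minimal Organization sketch for a home or "About Us" page.
     All company details are placeholders. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Co",
  "url": "https://www.example.com",
  "logo": "https://www.example.com/logo.png",
  "description": "Example Co makes project-management software for small teams.",
  "contactPoint": {
    "@type": "ContactPoint",
    "contactType": "customer support",
    "email": "support@example.com"
  }
}
</script>
```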
LocalBusiness:
This schema is meant to be used if the goal is to show up in Google
search as a local business. The LocalBusiness schema would be used for
this instead of the Organization schema mentioned above. This allows for
additional information like hours of operation and can help you show up
in local search results.
ProfilePage:
This schema gives information about people who contribute to the
website, such as blog authors or contributors. It’s not meant for other
types of profiles, such as company leaders, board members, etc. (unless
they contribute directly to the website content).
FAQPage:
For pages that list Frequently Asked Questions (FAQs), this schema
helps answers to those questions appear in search result features like
People Also Ask. For sites that feature multiple FAQs across their
pages, such as answers to questions on individual product pages, it’s
important to make sure that those FAQs are unique, or Google may
consider it duplicate content.
QAPage:
This schema is for community forums that allow multiple people to
respond to a question with comments or threaded conversation. A site
would use this schema to mark up the question and multiple answers.
Of course, the above are the bare minimum schema types you should be including on your pages to optimize them for the Knowledge Graph.
2. Create or update Wikipedia and Wikidata pages
Having a page (or multiple pages) on Wikidata or Wikipedia greatly
increases the likelihood of getting information about that topic into
the Knowledge Graph.
Unfortunately, adding to and updating these community projects can be
incredibly tricky given their renowned hostility toward people editing
pages about themselves and related entities.
And doing it wrong can lead to bad press, erosion of brand trust, and a strong public backlash.
For example, The North Face received strong public criticism when it
tried to edit its own photos on Wikipedia. And now, the controversy
itself is listed on the Wikipedia page about the company.
Before attempting to add or modify content on Wikidata or Wikipedia, consider the following questions:
Does the person, organization, or information you want to add meet notability guidelines?
For example, your cousin Vinny’s local bar gigs might not be generally
notable, but if his band gets a big break and goes on tour, he might be
able to get a Wikipedia page.
Can you disclose any and all conflicts of interest?
Some conflicts of interest include making edits on behalf of yourself, a
relative, a close friend, an employer, and any organizations or
affiliations you are associated with now or in the past.
Are you familiar with the many conduct, content, and other policies that govern the project?
Wikipedia has its own complex ecosystem of editorial rules and
regulations, such as the need for a neutral point of view and being able
to verify facts and claims.
If you answer yes to all of the questions above, you may be able to make a request to create or update a page about yourself, your brand, or an entity or topic related to you or your client.
3. Set up your Google Business Profile
As mentioned above, adding LocalBusiness schema to your local
business website is essential. But if you want to really stand out in
local search, you should also create and optimize your Google Business Profile (GBP).
GBP gives businesses finer control over how their business, services,
and other commercial information appears in search results.
For example, you can:
Update important information like name, address, and phone number (NAP), multiple locations, and business hours
Categorize, describe, and provide updates about your business
Add images and videos
Provide product information
Enable reviews and customer messaging
Much of this information is included in knowledge panels and other
SERP features. It may also be added to Google’s Knowledge Graph.
Once your GBP is up and running, you can also link your profile to analytics tools like Semrush Local to get additional information about keyword rankings and competitors.
4. Verify government data sources
Sometimes information pulled from government sources can be inaccurate or associated with the wrong entity.
If you find incorrect data from a public source, it might be worth
tracking down the right government agency or department to ensure that
it gets corrected.
Some types of public data to check include:
Business incorporation documentation on file with your city, state/province, or national government
Financial reports and business filings, such as those provided to
the Internal Revenue Service (IRS) or Securities and Exchange Commission
(SEC), or similar agencies in countries other than the US
Patent, trademark, copyright, and other intellectual property registrations with regional or national authorities
Safety, hazard, and recall information submitted to local and national consumer protection watchdogs
Depending on your industry, you may have other public reporting or
information requirements that can lead to information about your brand
appearing in public data sources.
5. Create social media and community profiles
The more you can control or manage social media pages related to your
brand, the more opportunities you have to influence the Knowledge
Graph.
That’s because Google keeps track of official social media profiles
for some people and brands in the Knowledge Graph. It then displays
icons with links to those related profiles in the “Profiles” section of
the knowledge panel.
Only selected social media sites can appear in knowledge panels. These include:
Facebook
Instagram
LinkedIn
Pinterest
Snapchat
TikTok
X (Twitter)
YouTube
Google has added to this list over time, and it’s possible additional
social media profiles could be added to knowledge panels in the future.
In addition, Google may include recent posts by related profiles from some of these social media sites in search results.
You can use the “sameAs” property in Organization, LocalBusiness, and
ProfilePage schema on your website to associate your business or
personal brand with your social media profiles. You can also add links
to selected social media profiles in your Google Business Profile.
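For example, a stripped-down Organization markup block with sameAs links might look like this (the company name and profile URLs are placeholders):

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Company",
  "url": "https://www.example.com",
  "sameAs": [
    "https://www.facebook.com/examplecompany",
    "https://www.instagram.com/examplecompany",
    "https://www.linkedin.com/company/examplecompany",
    "https://www.youtube.com/@examplecompany",
    "https://x.com/examplecompany"
  ]
}
</script>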
Information may also be pulled into the Knowledge Graph from business profiles published on certain reputable third-party sites.
Beyond the best-known of these profile sites, you may be able to find others more relevant to your brand and industry, which Google could then use to pull information into the Knowledge Graph and share in SERPs. (See "Structured data from websites" above.)
Business sites often have their own criteria for claiming and managing profiles, and some of them may require a subscription.
6. Ask third-party websites to remove or correct outdated information
While Google prefers fresh content, it’s still possible for content
that’s old or no longer relevant to appear in search results.
If you find stale information about you or your company in search
results, it may be worth reaching out to the owners of the originating
websites to see if they will update or remove it.
You can also publish updates on your own site to help prompt other websites to write fresh content with the updated information. While you're at it, consider removing or refreshing your own old content to ensure that Google is pulling the most current and relevant information from a source you control.
7. Submit feedback about inaccurate Google Search results
Every Google search results page includes links for submitting feedback.
In the results for a search like "seals," for example, there are typically two "Feedback" links:
One at the bottom of the “People Also Ask” feature
Another in the knowledge panel section
Once you click the feedback link, you can select a general category for the feedback you want to give.
On the next screen, you can further categorize the feedback and provide additional details about the results.
Submitting feedback about a search result isn't guaranteed to lead to any changes. However, if Google agrees with the feedback, it may update the data in the Knowledge Graph, or even remove the listing from search results altogether.
8. Ensure consistency across your properties
Whether it’s your own website, Google Business Profile, social media,
or third-party sites, there’s one way to make sure your information
appears accurately in Google’s Knowledge Graph:
Consistency.
One of the reasons Knowledge Graph data may be missing is that
conflicting information is preventing Google from knowing what to put in
the search results.
The more you can do to check, double check, and triple check your
information across your own site and others, the more likely you’ll be
able to optimize Knowledge Graph data.
Is there a way to track entity presence and relationships?
Every entity in the Google Knowledge Graph has a unique identifier
known by the incredibly creative name “Google Knowledge Graph ID.” It’s
sometimes shortened to KGMID.
Wait. Where did the "M" come from?
Great question. KGMID is short for "Knowledge Graph Machine ID," and there are two versions of it, one that begins with "/m/" and another that starts with "/g/":
/m/ signifies an entity that originated in the old Freebase database
/g/ signifies an entity added to the Knowledge Graph after Freebase was discontinued in 2016
It’s possible to inspect or view the HTML source of a Google search
results page to find the KGMID for a given entity. But there’s an easier
way to get it.
Click on the “three dots” at the top of a knowledge panel for the entity.
Click the “Share” link to copy the shortened URL for the search.
Paste the shortened URL into your browser’s location bar and click “Enter.”
The shortened URL will redirect to a full Google search URL that looks something like the following:
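For the seals example, the expanded URL will contain a kgmid parameter along these lines (the other parameters vary and are omitted here):

https://www.google.com/search?q=seals&kgmid=/m/0gd3n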
The URL parameter starting with kgmid= contains the KGMID, which for seals is /m/0gd3n.
Tools for tracking Knowledge Graph entities
It’s okay to go through the manual process above once to fully
understand the mechanics of it. However, it would be tedious to find and
track entities by KGMID that way all of the time.
Fortunately, there are a few tools that can help.
Kalicube boasts
a huge database of tracked entities, brands, and knowledge panels. It
has various levels of service that include consulting on brand
visibility, gaining knowledge panels, rebranding, and competitive
research.
InLinks focuses on identifying entity-based gaps, building and marking up content to fill those gaps, and creating contextual interlinking to reinforce entity relationships throughout a website.
WordLift
provides a number of services focused on SEO, engagement, and
conversion. Its Visibility solution, in particular, helps identify
topics and terminology for creating and marking up entity-based content.
Google Knowledge Graph Search API
is part of Google Cloud, and it can be used on a limited-quota basis
for free. Searches using the API return limited information about the
entities returned by the query, including name, description, image, and
the KGMID.
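For example, a request for "seals" might look like the following, with your own API key substituted in. The response comes back as JSON-LD; the snippet below is trimmed, and the field values shown are illustrative:

https://kgsearch.googleapis.com/v1/entities:search?query=seals&key=YOUR_API_KEY&limit=1

{
  "@type": "ItemList",
  "itemListElement": [
    {
      "@type": "EntitySearchResult",
      "result": {
        "@id": "kg:/m/0gd3n",
        "@type": ["Thing"],
        "name": "Seal",
        "description": "Marine mammal"
      },
      "resultScore": 1000
    }
  ]
}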
While there’s no single tool that can provide a magical way to
dominate Google’s Knowledge Graph, tools like these can help to better
understand where you may already be doing well in providing the right
entity-based signals to Google—and where you can improve.
Pro tip: As you build or refresh content on your site, rather than focusing on creating topic clusters
around traditional keyword research, consider creating content clusters
that focus on entities as topics. You can also find new ways to fill
out your content strategy through entity-based competitive analysis.
How to correct or claim a Knowledge Panel
Some knowledge panels can be claimed by the individual or organization they represent.
Never, ever try to claim a knowledge panel for an entity you are not authorized to represent. It’s unethical, and it can endanger your ability to make legitimate claims on knowledge panels in the future.
If the panel is about you personally, you can also request that information like your age, date of birth, and relationship status be removed.
Suggestions submitted by the knowledge panel claimant are more likely
to be accepted by Google. However, Google will still have final say in
what information gets added to the Knowledge Graph and appears on
knowledge panels.
To check if you can claim your knowledge panel:
Log into your Google account.
Search for yourself or your organization.
In the knowledge panel, click on the “three dots” near the top.
In the dropdown menu that appears, click on “Claim this knowledge panel.”
You may need to go through additional verification procedures to complete the process and fully claim the knowledge panel.
Once you're verified, you can suggest changes by clicking "Suggest edits" in your knowledge panel.
Although edits are not always accepted by Google, claiming a
knowledge panel generally gives you a greater ability to influence the
information that appears in the panel. Google also typically responds
faster (within a few days) for edits suggested by verified individuals
as opposed to anonymous users submitting feedback.
Plan your content for the Knowledge Graph
The Knowledge Graph is Google’s way of understanding entities and
their relationships. As AI and other technologies build on that
understanding, it will only become more important.
In other words, relevant, entity-based content is the future and present of SEO.
Now that you’re familiar with how the Knowledge Graph works—and why it’s so important—start working on a content marketing plan that will connect your brand to the right people, products, and ideas.