
How to Train GenAI to Work as Your Personal Research Assistant

GenAI pulls from limited resources—and only knows as much as you tell it. These tips will help you improve its effectiveness for research purposes.

Words by Joey Jakob, Visuals by Thumy Phan

Generative AI is an inescapable fact of life now—but are we making the best use of it? And do we understand how to use GenAI, especially as researchers?

In our personal time, we might have GenAI write us a poem or concoct a cocktail recipe from the listed ingredients currently at our disposal. These use cases are cute and fun, but don’t inherently train GenAI as our professional assistants—which is what we should be doing as researchers.

I moved to user experience research three years ago after a decade and a half of professional social research experience. Much has changed even in this short time, specifically with the ubiquitous availability of GenAI.

With my background in sociology—including how information is communicated, shared, and understood—and my current work in UXR, I've put together a simple recipe in the form of dos and forget-me-nots for using GenAI to begin any inquiry. These high-level examples exhibit GenAI’s basic functionality, limitations, and biases.



Training GenAI through collaboration

We feed GenAI “prompts” that enable it to effectively listen to what we say, so it can provide the info we ask for. GenAI is our collaborator, our professional confidant, our partner in crime.

Strictly speaking, it’s a misnomer to call what we do “training”: GenAI arrives to us, the end users, pre-trained by the product’s developers. With each new release, GenAI receives updates, not just to its information but also to its backend capabilities. As of this writing, ChatGPT has its own proprietary browser, which functions much like Chrome or Firefox but is built by OpenAI (ChatGPT’s developer).

When trained with good data, GenAI can deliver valuable suggestions, interpretations, and even potential directions for you to pursue as the project leader. We’d like to assume all data fed to GenAI is “good” (factual, trustworthy, and not discriminatory), but the sociologist in me knows this is impossible.


Context is everything

Let’s start with what GenAI does—and ChatGPT specifically—at the most basic level. GPT stands for “generative pre-trained transformer,” which is a short way of saying that it only knows what it knows from the data provided to it. And, from this data, it is designed to generate human-like text. When we say it’s “generative,” we’re referring to its predictive functions: what could come next based on what has come before.

What could come next is important here: GenAI provides answers to questions asked of it, but these answers can only be based on the data GenAI has access to. Context becomes fundamentally important.

Think of all the things you’re aware of in a given research scenario: you know your participants and their sentiments, you know your stakeholders and their needs, and you know what business goals you’re looking to meet.

GenAI isn’t aware of what your stakeholders said in yesterday’s meeting (unless you drop in meeting notes). GenAI also isn’t aware of company metrics or last year’s performance (again, unless you share this intel with it). And GenAI doesn’t know that your participants answered a survey last year (unless… well, you already know).

Ultimately, since GenAI is your assistant, you must lead it by providing valuable information and contexts, which brings us to prompting.


Prompting 101

Our interactions with GenAI tailor it for our personal use. What we say to GenAI—known as “prompts”—feeds it important context cues describing what we want to know. Using “natural language processing,” GenAI is trained to find crucial keywords and phrases within a prompt or question.

These techniques work to discern which elements within a prompt will most likely lead to relevant and accurate information. Because it’s “generative,” GenAI works best in conversation. Again, all roads lead back to collaboration.

GenAI tools like ChatGPT come pre-trained. Our job as users is to talk to them, simply and straightforwardly. For example, I asked ChatGPT to explain what it knows about desk research.

Always begin any session with the most basic of prompts, asking it what it knows about your topic. This way you know what it already understands, at a high level. From here, you might decide it has provided too much information, and ask for refinement. So tell it exactly how you’d like the information to be structured. In this case, I asked for a single sentence:
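If you like to keep your prompts consistent across sessions, that kickoff-and-refine flow can be sketched as plain string construction. This is a minimal illustration in Python; the function names and parameters (`kickoff_prompt`, `refine_prompt`, `max_sentences`) are hypothetical, not part of any GenAI product.

```python
def kickoff_prompt(topic: str) -> str:
    """Opening prompt: ask what the model already knows about the topic."""
    return f"What do you know about {topic}? Give me a high-level overview."


def refine_prompt(max_sentences: int) -> str:
    """Follow-up prompt: constrain how the next answer is structured."""
    noun = "sentence" if max_sentences == 1 else "sentences"
    return f"That's more than I need. Restate it in {max_sentences} {noun}."


print(kickoff_prompt("desk research"))
print(refine_prompt(1))
```

The point isn’t the code itself but the two-step habit it encodes: first ask what the tool knows, then dictate the shape of the answer.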

From here I wondered how ChatGPT might help explain the value of desk research to a wide audience of stakeholders across many departments.

Once GenAI has generated some intel for you, it’s time to consider if it’s telling you the truth. It’s been known to lie—or not verify the information it provides—and it’s a tremendously fun Google search to dive into.


Fact checking

ChatGPT will be the first to tell you to fact-check the information it shares.

Extra important: ChatGPT can pull a fast one on ya, and later sometimes removes information it previously provided. So click, copy, and screengrab what it originally shares with you!

The image below shows the two sources ChatGPT felt relevant to support its case for the value of desk research. The first source is a blog post from a SaaS tool, Priceagent, that discusses strategies for new product development. The second source is an academic paper on the challenges of introducing new products.

These sources are relevant to understanding what desk research is, what options are available, and the difficulties in bringing new products to market. But the sources also make assumptions that might be irrelevant to our needs.

Also, note that I influenced ChatGPT by asking how to increase the value of desk research for weary stakeholders—so it assumes this is an ongoing issue to be addressed unless I tell it otherwise. GenAI is like a sponge or observant child: whatever you spill in its presence will shape it.

Here, ChatGPT’s basic understanding of what desk research is, and the value of desk research, is pretty good. The explanation gives broad examples that make it easy to understand the overall concept. But how did it home in on an example of a company considering launching a new product?

In asking where it pulled the example from, it tells me that it’s not technically drawn from a specific case, but is more of a typical scenario where desk research could—could—be applied.

Remember, GenAI seeks to be predictive. And prediction for this tech is entirely based on what it has previously known. So I next ask, “Where did you get these two sources from?” ChatGPT shares that it's referring to “common knowledge” here.

Above, it told me that part of the intel it shared was common knowledge, so I asked it: How do you know the information you provided is common knowledge? And why should this be trusted as such?

A problem here might be that “common knowledge” still takes many things for granted because it inherently assumes a baseline of shared information. ChatGPT, like all research tools, has inherent biases programmed into it, and these stem from the people who choose what information is important—this worthwhile data informs and instructs how it works.

For instance, ChatGPT assumed first and foremost that the value of desk research is to direct future product offerings, with a focus on consumer desires and market trends as the reason for conducting this research in the first place. Knowing GenAI’s assumptions enables us to decide how, when, and even if, we’re to use the intel it provides.

Let’s remember again that GenAI can only tell you what it knows. And so you must ask it how it knows what it knows. Because there’s a chance you don’t need the information it provides, or trust its sources in the current form.

That is, unless you give it more data, so it can tell you what you actually want to know. Simply put, ChatGPT says something is common knowledge when the information is commonly accepted—which really means when it believes something is widely accepted.


The value of GenAI’s information

Unlike some other LLMs (large language models, the basis of all GenAI text tools), ChatGPT draws only upon its training data, the information you share with it, and its own internal search browser. Contrast this with Perplexity.ai, whose models derive sources directly from public internet sites. Knowing the limitations of ChatGPT’s information allows you to further assess its value.

Part of valuing GenAI’s information, and particularly here with ChatGPT, is continuing to ask it questions until you get the answers you’re looking for. Or, at least until it explains why it cannot answer you directly. After an answer is generated for you, remember that you can keep diving further in with follow-up questions.

So I asked it how it knows the information it has isn’t current or specific enough. (I asked about other things, too, like trust and security, but we’ll save the crossed-out parts for another piece!) In this conversation, it alluded to accessing intel beyond its training data, which we assume may be out of date or irrelevant by the time we query it.

I commend ChatGPT’s honesty here, with its awareness of its training data limitations, and it looks to us—its collaborators!—to provide sufficient context cues. The words and phrases we use tell GenAI not only what to pay attention to, but also when to focus on it.

As the above shows, if our query is about something noticeably recent, trending, or requires distinct data, GenAI is trained to “see” this because of our word choices. It’s not perfect, but if you don’t like what it’s telling you, besides giving it more data, try asking it in a different way.


Providing feedback for adjustments

There are other tools at your disposal for shaping GenAI’s responses, besides telling ChatGPT to be brief or asking for only a single sentence. Consider the thumbs up or thumbs down offered after a response is generated. A thumbs up confirms your satisfaction, telling GenAI it’s on the right track.

A thumbs down to the question “Is this conversation helpful so far?” generates a popup where you can provide additional information.

There are a handful of auto-populated options, like “don’t like the source it cited,” “not factually correct,” or “don’t like the style.” But crucial ones to consider are “didn’t follow instructions,” “refused when it shouldn’t have,” and even “being lazy.” GPT-4 has drawn criticism since as early as November of last year because it sometimes outright refuses to do something simple, or even something it’s previously done for you.


Actionable next steps

Remember how I said that GenAI works best collaboratively because you’re in conversation with it? Think of this as a high-level recipe for writing usable prompts, and repeat each step whenever you supply it with new information, or whenever it introduces something new:

✔ Tell GenAI the topic and ask what it knows about it

Or, if you’re coming back to a previous session, remind it about the topic by asking if it remembers. It will tell you it does.

✔ Limit the length of response

Without specific prompts, GenAI is prone to ramble, because its baseline is larger information delivery. If you only want three sentences, say so.

✔ Give it more context and additional parameters

Maybe it's only considering this year’s information but you want the past three combined. Supply the info and tell it why this additional context is important.

✔ Ask it how it knows what it knows

Ask where its sources are located, and why the information it provided should be considered credible.

✔ Follow up questions, repeated until you’re satisfied

GenAI will never tire. It will keep going as long as you do, generating one answer after another. Beware: it might start repeating itself, or even lying (like telling you something exists when it doesn’t). But as your assistant, it will have your back until the very end.
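If you drive GenAI through an API rather than a chat window, the recipe above maps naturally onto an ordinary message list. This is a rough sketch, assuming the widely used role/content message format; the function name `recipe_messages` and its parameters are illustrative, not any vendor’s API.

```python
def recipe_messages(topic: str, context: str, max_sentences: int) -> list[dict]:
    """Compose the high-level prompting recipe as a chat message list:
    state the topic, limit the length, supply context, and ask the
    model how it knows what it knows."""
    return [
        {"role": "user", "content": f"What do you know about {topic}?"},
        {"role": "user", "content": f"Answer in at most {max_sentences} sentences."},
        {"role": "user", "content": f"Here is additional context you should use: {context}"},
        {"role": "user", "content": "How do you know what you just told me? "
                                    "Where are your sources from, and why are they credible?"},
    ]


msgs = recipe_messages("desk research", "last year's participant survey results", 3)
for m in msgs:
    print(m["content"])
```

Treat the fourth message as non-optional: the fact-checking question belongs in every session, not just the ones where an answer looks suspicious.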


Final thoughts

This advice on writing prompts applies to our current understanding of best practices for interacting with GenAI. But this tech is ever-evolving. Assuming you have all you need to know about the tech, and stopping your development there, will mean failing to adapt alongside this powerful tool.

Taking an honest look at GenAI can be scary, but taking a hear-no-evil, see-no-evil, speak-no-evil approach can leave us researchers in the dust. A dusty researcher isn’t asking the right questions—an adaptive researcher uses all available tools to explore and find trusted results.



Joey Brooke Jakob is obsessed with context, meaning, and the tools we use to communicate. Outside of creating, collaborating, and consulting on simplified user experiences, research methods and operations, she plays with and writes about GenAI and fantasy baseball.
