I wrote an editorial on how executives view GenAI technology, and someone commented that they wanted references for the results. Most of us might take a result from a co-worker and use it without asking for any proof of where it came from. If we trust our co-workers, this can work well. If we don’t, we might want to know where they learned this and how they know it works.

For GenAI, we might want some references to help us learn more or check something. Or, more likely, learn how to modify something that isn’t perfect.

In this sense, GenAI can be better, as it can give us the references we need to learn how an answer was generated.

This post looks at how I got references in a few cases using Claude, Perplexity, and Copilot.

This is part of a series of experiments with AI systems.

Asking a Question

I decided to use a question that someone had asked me, one I had originally used Copilot to answer. Here is my prompt:

I need to schedule a Powershell script to access a SQL Server and want to use a managed service account to run the script. How can I do this in powershell
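For context, one way to approach this (not the output of any of the tools below, just a sketch with hypothetical names like CONTOSO\SqlJob$, SQL01, and the script path) is to let the script use Windows authentication, so the gMSA's credentials are picked up automatically, and then register a scheduled task that runs under that account:

# Query-Sql.ps1 - runs under the gMSA, so Windows authentication is used
# (assumes the SqlServer module is installed)
Import-Module SqlServer
Invoke-Sqlcmd -ServerInstance "SQL01" -Database "master" -Query "SELECT SUSER_SNAME() AS RunningAs;"

# Register a scheduled task that runs the script as the gMSA.
# The account name ends in $ and LogonType Password tells Task Scheduler
# to retrieve the account's password from Active Directory.
$action    = New-ScheduledTaskAction -Execute "powershell.exe" -Argument "-NoProfile -File C:\Scripts\Query-Sql.ps1"
$trigger   = New-ScheduledTaskTrigger -Daily -At 2am
$principal = New-ScheduledTaskPrincipal -UserId 'CONTOSO\SqlJob$' -LogonType Password
Register-ScheduledTask -TaskName "Nightly SQL Query" -Action $action -Trigger $trigger -Principal $principal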

Let’s look at how the various tools I’ve used respond.

Claude

Claude.ai is from Anthropic and it’s a tool I’ve enjoyed using. When I pasted in the prompt, I saw this response:

I got to the bottom and found no references, but when I asked for them (see my prompt in the image below), I got a few to look at.

[Image: 2025-05_0223]

That’s one of the tricks with AI. You can ask it to explain itself.

Perplexity

My second test was Perplexity, which I started using a bit after Grant mentioned it.

[Image: 2025-05_0220]

Right at the top, I get some references that I can click. When I get to the bottom, I have a list of possible future prompts, which aren’t references, but they get me thinking about things I might need to consider.

[Image: 2025-05_0221]

That worked well and I liked having the references, though I’d prefer them at the end.

Copilot

I decided to try Copilot inside of VS Code, which is where I’d likely use it. I entered the prompt as a comment in a blank .ps1 file. After my first line, it suggested a few more comments to give context. I accepted a few and then added the last #include references line.
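The comment block looked something like this (the first two lines are my prompt; the context comments here are an approximation of the kind of lines Copilot suggested, not the exact text):

# I need to schedule a Powershell script to access a SQL Server and want to use a managed service account to run the script. How can I do this in powershell
# The script will run as a scheduled task
# The managed service account already has access to the SQL Server
#include references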

[Image: 2025-05_0224]

I didn’t get references and the generation stopped at the #End of script line.

I then added the comment below asking for URL references.

[Image: 2025-05_0225]

Each of the items above appeared as a line in italics, like other suggestions. I hit Tab to accept each one and another reference would appear. When the last one started to repeat, I stopped accepting responses.

Summary

I’m not evaluating the code here, just trying to see if I could get some idea of where it came from. I don’t deal with MSAs a lot, so it’s good that I can get some URLs to check the docs or find options other than the code I was given.

If I asked this question on Stack Overflow, I might get the same code (or similar), but getting URLs or references isn’t always easy there. Some people provide them, some don’t, and some might just close my question and say I should have searched better.

I think GenAI does a good job here of giving me a starting point and places to go for more information, which is helpful.
