Using AI for Health, Longevity, and Research - Your Favorite Prompts

I'm not as hardcore with my prompts yet, but I do like Perplexity; I have an Enterprise Pro subscription for $40 a month. It seems to have learned my "style", so I haven't needed to specify the writing style in my prompts since the first time I used it.

It's quite versatile as an LLM agent and accesses:

  • Sonar
  • GPT 5.1
  • Gemini 3 Pro
  • Kimi K2 Thinking (US-hosted version)
  • Claude Sonnet 4.5

If you step up to the Enterprise Max version, it adds:

  • Claude Opus 4.5
  • o3 Pro

It can use Nano Banana for graphics if you prompt it to do so, or in Auto mode it will pick the tool that best fits your prompt/search criteria.

I've used it to generate product pictures, and it does a great job (with the right prompts, of course). It will go out and research the product; evaluate the look, feel, colour, etc. of the product; and use editable text in a label. I had tried a couple of other AI graphics tools and they never got much right; I wasted several hours just trying and failing on the text aspect. Perplexity produced perfect images on the second try.

1 Like

For those of you who have Venmo, look for an offer in your inbox to get one year free of Perplexity Pro. I was about to delete it!

2 Likes

Looks like Gemini is winning this race. The only reason Gemini (paid) is not my favorite AI is that it is slower, at least for the questions I ask, and you have to approve Gemini's research plan before it starts the task.
The regular Google AI thinking mode (Gemini) is the one I use the most, followed by Perplexity.
The following is not just an opinion; it's where people put their money where their mouth is.

2 Likes

That's one of the reasons I like to use a good AI agent that can access multiple AIs.

Besides Perplexity, another good one is www.you.com; try the chat mode first.

1 Like

When you are considering buying something, you can do a first pass by asking Gemini or another AI system to identify the low-cost products and calculate the cost/mg or similar. Then, once you've narrowed it down to a few vendors, perhaps on Amazon, you can do a really good cost comparison. I hate that Amazon makes it so difficult to compare the price per unit that you actually care about. So here is how you do it:

With Gemini, you can drop in the web addresses (URLs) for the products you are looking at and ask Gemini to evaluate them, calculate the cost per mg, and compare them all. It's awesome. For Amazon, you only need this part of the URL (not all the tracking codes, etc., that follow it).

Here is the URL from Amazon that you copy:

https://www.amazon.com/Natural-Factors-Oregano-180mg-Virgin/dp/B00DUHF9MO/

and leave a space between the URLs for each product in your prompt.
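If you want to automate that URL cleanup, here is a minimal Python sketch, assuming Amazon's usual /dp/ASIN URL layout (the regex and the tracking tail in the example are my own illustrative assumptions, not anything Amazon documents):

```python
import re

def clean_amazon_url(url: str) -> str:
    """Trim an Amazon product URL down to its /dp/ASIN form,
    dropping the tracking parameters that follow."""
    # ASINs appear to be 10 alphanumeric characters after /dp/
    # (an assumption based on Amazon's usual URL layout).
    match = re.search(r"(https://www\.amazon\.com/.*?/dp/[A-Z0-9]{10})", url)
    if match:
        return match.group(1) + "/"
    return url  # leave unrecognized URLs untouched

# Example: the oregano-oil URL from above, with a made-up tracking tail.
raw = ("https://www.amazon.com/Natural-Factors-Oregano-180mg-Virgin"
       "/dp/B00DUHF9MO/ref=sr_1_5?keywords=oregano")
print(clean_amazon_url(raw))
# -> https://www.amazon.com/Natural-Factors-Oregano-180mg-Virgin/dp/B00DUHF9MO/
```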

To make it so you can copy and paste the results into this forum, be sure to specify:

Output Constraints:

  • Use Markdown formatting (Headers, Bold, Lists).
  • Do not use LaTeX or special characters that break simple text parsers.
  • Embed direct URLs to any external references cited.
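Putting the pieces together, here is a hedged sketch of how you might assemble the full prompt in Python (the URL list and the wording of the request are placeholders to adapt, not a fixed recipe):

```python
# Build the comparison prompt: cleaned product URLs separated by
# spaces, followed by the output constraints listed above.
urls = [
    "https://www.amazon.com/Natural-Factors-Oregano-180mg-Virgin/dp/B00DUHF9MO/",
    # ...add the other products you're comparing
]

prompt = (
    "Evaluate these products, calculate the cost per mg for each, "
    "and compare them all:\n\n"
    + " ".join(urls)
    + "\n\nOutput Constraints:\n"
    "• Use Markdown formatting (Headers, Bold, Lists).\n"
    "• Do not use LaTeX or special characters that break simple text parsers.\n"
    "• Embed direct URLs to any external references cited.\n"
)

print(prompt)  # paste the result into Gemini (or your AI of choice)
```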

See the result from this query here: Spice Cabinet Longevity – Thymol Triggers Mitophagy via Mitochondrial Uncoupling - #22 by RapAdmin

Please post your pricing if you do one of these analyses and it's on one of the topics we're discussing, to save other people the hassle of doing it.


What a lot of people are missing is benchmarks… workflows, workflows, and benchmarks…

If @RapAdmin wants to summarize videos, first create a benchmark of X videos and the information you want in each summary.

Then create a workflow, test it against the benchmark, iterate, change models, etc.

I suspect a summary workflow would first create a transcript with speakers (difficult), then, where available, apply video understanding to charts/tables, and finally create a summary of the most important parts.

Presumably coding models can build this kind of custom solution today, if not soon; you could do similar workflows for other tasks, and they might even run in the cloud. A rough sketch of the pipeline is below.
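Here is what that skeleton might look like in Python; every function is a hypothetical stub (nothing here is a real API), each standing in for a model call you would wire up yourself:

```python
# Hypothetical sketch of the summary workflow described above.
# None of these functions exist yet; each stands in for a model call
# (speech-to-text, a vision model for charts/tables, and an LLM
# for the final summary).

def transcribe_with_speakers(video_path: str) -> str:
    """Speaker-labelled transcript (the difficult part)."""
    raise NotImplementedError("plug in your speech-to-text model")

def describe_charts(video_path: str) -> str:
    """Text descriptions of charts/tables, when video understanding is available."""
    raise NotImplementedError("plug in your vision model")

def summarize(transcript: str, chart_notes: str) -> str:
    """Summary of the most important parts."""
    raise NotImplementedError("plug in your LLM")

def summary_workflow(video_path: str) -> str:
    transcript = transcribe_with_speakers(video_path)
    chart_notes = describe_charts(video_path)
    return summarize(transcript, chart_notes)
```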

I’m not clear on what you’re saying here. Can you provide an example?

2 Likes

You create a perfect summary for, e.g., a YouTube video titled "Birds aren't real".

When you're testing a new chatbot/prompt, ask it to create a new summary of "Birds aren't real", which you already have the perfect summary for. Then open up a new chat with the best chatbot and ask: "Here's the perfect summary, and here's a new one; on a scale of 1-10, how well did it summarize all the important parts?"

Repeat for 10 videos (presumably coded with Codex), and in 10 seconds you'll know how well your prompt/workflow does against your own custom benchmark (10 videos and 10 perfect summaries). The more perfect summaries and videos you collect, the better your benchmark will generalize. A sketch of the loop is below.
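As a minimal sketch of that loop in Python (summarize_video and score_against_reference are hypothetical stand-ins for the chatbot under test and your judge model; the judge prompt mirrors the wording above):

```python
# Custom benchmark loop: for each video, generate a new summary and
# have a judge model score it 1-10 against your hand-written
# "perfect" summary. Both model calls are hypothetical stubs.

def summarize_video(video_id: str) -> str:
    raise NotImplementedError("call the chatbot/workflow under test")

def score_against_reference(perfect: str, candidate: str) -> int:
    # Judge prompt, per the post above: "Here's the perfect summary,
    # and here's a new one; on a scale of 1-10, how well did it
    # summarize all the important parts?"
    raise NotImplementedError("call your best judge model")

# video_id -> your hand-written perfect summary (start with 10 of these)
benchmark: dict[str, str] = {
    "birds-arent-real": "...your perfect summary here...",
}

def run_benchmark() -> float:
    scores = [
        score_against_reference(perfect, summarize_video(video_id))
        for video_id, perfect in benchmark.items()
    ]
    return sum(scores) / len(scores)  # average score for this prompt/workflow
```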

So whenever a new chatbot version comes out, you can run it against your own custom benchmark.

Or you can wait a few years and the frontier models will do this in the background; at least, that's Sam Altman's idea.