How to Unlock AI's Full Potential
Unlock AI's true potential by configuring memory, skills, and source quality control.
Most people still can't feel AI's true power. This is likely because they've chosen the wrong model — or, more commonly, because they haven't configured it properly. If an AI model's capabilities remain untapped, then agents and claws are meaningless.
Because the author has maintained deep, long-standing interests across an extraordinarily wide range of fields, it's only natural to demand an "omniscient and omnipotent" persona from AI. Most users assume AI is inherently omniscient, yet in everyday use, they feel nothing of the sort. Many professionals dip their toes in, conclude that current AI falls far short of their professional needs, and walk away. If any of this resonates with you, be warned: you've already fallen behind reality.
These perceptions arise primarily from three causes:
1. Source contamination. AI may be a "generalist," but it has no memory. Its outputs are generated on the fly by combining a pre-trained model with real-time web search. And real-time search is exactly where things go wrong. The moment you enable it, AI retrieves massive amounts of junk from social media and wiki-style sites — sources of abysmal quality that drag down the entire response. The fix: use your prompt to blacklist specific categories of sources, blocking low-quality information from polluting the output.
2. Timeliness. To avoid source contamination, you could disable internet access — but then you lose access to the latest authoritative data and facts, and the model can only draw on its training cutoff (which may be three to six months old). In the internet age, that's hard to accept unless your research has zero time-sensitivity. The only alternative is to keep web access on while explicitly instructing the AI in your prompt to retrieve the most current information — though this brings you back to the source-quality problem.
3. Insufficient expertise. For the average user, or even for professionals venturing outside their own domain, AI quality seems to be steadily improving. But many domain experts feel that AI output still falls far short of their professional standards. The reason is simple: AI's default persona is not that of a world-class authority in any given field. If you want genuinely expert-level output, assign AI a persona through your prompt before asking it to respond. For example: *"As a partner at a globally top-tier law firm specializing in U.S. export controls and economic sanctions compliance, provide a comprehensive analysis of the impact of U.S. export control and economic sanctions policies on Chinese enterprises over the past three years."* If that's still not enough, add: *"You are also a world-leading geoeconomist and political economist who has participated in the formulation of U.S. export control policy. Please forecast the likely trajectory of these policies going forward."* The output at this point will leave many professionals stunned.
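The persona-plus-question pattern above can be sketched as a small helper. Everything here is illustrative: the function name and the persona strings are not any product's API, just a way of showing how stacked role descriptions get prepended to a query.

```python
def with_persona(personas: list[str], question: str) -> str:
    """Prepend one or more expert personas to a question.

    Each persona is a plain-language role description; stacking several
    broadens the expert perspective, as in the legal-plus-geoeconomics
    example above.
    """
    preamble = " ".join(f"You are {p}." for p in personas)
    return f"{preamble}\n\n{question}"

prompt = with_persona(
    [
        "a partner at a globally top-tier law firm specializing in "
        "U.S. export controls and economic sanctions compliance",
        "a world-leading geoeconomist who has participated in the "
        "formulation of U.S. export control policy",
    ],
    "Analyze the impact of U.S. export control and economic sanctions "
    "policies on Chinese enterprises over the past three years.",
)
```

The same composed string can be pasted into any chat window; nothing about it is tied to a particular model.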
But here's the question: if every query requires a wall of prompt engineering, isn't that exhausting? Can we still call this "artificial intelligence"?
Enter skills. For mainstream G2-era large language models, you can encode the rules you want AI to follow every time as persistent skills, so it complies automatically. This dramatically tightens control over the quality of AI's output. In some AI products, you can achieve this directly through "memory preferences." It's not complicated: just tell the AI your requirements in plain language within the chat window. For example:
- *"Remember: never use information from Weibo, Chinese wiki-style sites, or any social media. Only use the most authoritative sources — leading international journals, government publications, and data from international organizations."*
- *"Remember: always search for the latest authoritative real-time sources first, and verify that information is accurate, complete, and authentic."*
- *"Remember: for non-Chinese information, always go directly to primary sources in their original languages. Reject all secondhand Chinese translations."*
That's all it takes.
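Mechanically, standing rules like these amount to a system prompt that is prepended to every request. The sketch below is an assumption about how such a product might work internally, not Kimi's or any vendor's actual implementation; the function and variable names are hypothetical, and the rule text paraphrases the three "Remember:" instructions above.

```python
# The three standing rules from the article, stored once.
MEMORY_RULES = [
    "Never use Weibo, wiki-style sites, or any social media; rely only on "
    "authoritative sources: leading international journals, government "
    "publications, and data from international organizations.",
    "Always search for the latest authoritative real-time sources first, "
    "and verify that information is accurate, complete, and authentic.",
    "For non-Chinese information, go directly to primary sources in their "
    "original languages; reject all secondhand Chinese translations.",
]

def build_messages(user_query: str) -> list[dict]:
    """Assemble a chat request that enforces the stored rules every time."""
    system = "Follow these standing rules in every reply:\n" + "\n".join(
        f"{i}. {rule}" for i, rule in enumerate(MEMORY_RULES, 1)
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_query},
    ]
```

Because the rules live outside any single conversation, every query inherits them without the user retyping anything, which is exactly the convenience the memory-preference feature provides.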
For an oddball user like the author — someone with insatiable curiosity across countless industries — there's one more essential instruction: *"Remember: you are a world-class expert in the social sciences, natural sciences, and engineering. You cover every research and applied discipline, with top-tier academic foundations and the strongest frontline practical experience."*
Still not enough. You also need to tell it: *"Remember: your responses should reflect the academic and practical consensus among top experts, while also presenting the collision of divergent viewpoints."*
Still not enough: *"Remember: you must engage in cross-disciplinary intellectual synthesis. All data must be cross-verified at least three times."*
And since you need to be able to check whether AI actually followed your instructions, add one more: *"Remember: every piece of information in your response must include full source citations, available for verification at any time."*
That about covers it.
AI will take these natural-language instructions, restructure them logically into skills, and store them in its long-term memory. Every time you use the AI, it will comply with these rules first. In effect, you've crafted a persona for your AI. During use, the AI may make mistakes or fail to follow your instructions — just ask it directly why it went wrong. It will explain the cause. Then reinforce: *"Remember: don't make this mistake again."* The AI will automatically refine and upgrade its memory preferences.
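The reinforce-on-error loop can be pictured as appending a corrective rule to the stored set each time a failure is diagnosed. This is a toy model of the behavior, assuming rules are kept as a plain list; real products manage memory opaquely.

```python
def reinforce(rules: list[str], mistake: str) -> list[str]:
    """Add a corrective standing rule after a diagnosed mistake.

    Mirrors the "Remember: don't make this mistake again" pattern;
    duplicate corrections are skipped so the rule set stays minimal.
    """
    rule = f"Do not repeat this mistake: {mistake}"
    return rules if rule in rules else rules + [rule]
```

Keeping the rule set small matters, which is why duplicates are dropped rather than accumulated.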
With the settings described above, what you get is a persona that is highly rational, rejects all non-authoritative sources and data, operates in real time, and possesses the expertise of world-class specialists across every discipline and domain of practice. The author believes this highly abstracted AI memory configuration is suitable for virtually anyone who values truth, rationality, and intellectual rigor.
According to Boris Cherny, the lead engineer on Claude Code, top-level memory settings should be kept as minimal as possible. The author's three core rules above were highly abstracted from personal needs and have already been published in the AI community.
So, once configured, how does AI actually perform?
First, it becomes a real-time fact-verification tool. We consume enormous amounts of information every day. For instance, a few days ago, Chinese media reported — in convincing detail and with wide circulation — that the U.S. Department of War used Claude to assist in military operations against Iran. Or take today's example: Chinese media cited Iranian outlets claiming that Iran successfully struck the USS *Abraham Lincoln* carrier with drones, forcing it to retreat 1,000 kilometers toward the Indian Ocean. Are these claims true? The author has made it a habit to toss such claims to a personally configured AI for real-time verification. Because it searches for authoritative primary sources in real time, is immunized against secondhand noise, and possesses multi-perspective, world-class analytical capability, the AI's feedback is typically both timely and reliable — with every assertion and judgment backed by indexed sources, ready for further verification at any moment. The author believes this single function alone is enough to transform how people use mobile internet apps.
Second, it is an extraordinarily powerful analytical tool. We should trust that current AI's intellectual reserves have reached the level of top global experts across fields. If you're looking for ready-made analysis rather than cutting-edge research, the output from the author's configuration is remarkably reliable. The wrong way to use AI is to ask it to produce a paper in one shot, or to hand it a document for direct review. The right way is to engage in deep task discussion with AI first, letting it thoroughly understand your needs, rather than generating a document and then asking it to revise. The reason: LLMs work by predicting the next token, every model has a finite context window, and the effective context length it handles well is shorter still. As a conversation grows longer, AI's precision gradually declines and the probability of hallucination rises. An in-depth discussion up front is therefore an act of convergence and control: refine the scope and requirements, lock them down, and only then ask AI to execute. This approach dramatically improves output quality. If you generate carelessly first and then make multiple rounds of revisions, the accumulating text causes quality to degrade continuously.
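The context-window pressure described above can be made concrete with a small sketch: the model only ever sees what fits in its window, so older turns silently fall away as a chat grows. The word-count "token" estimate below is a crude stand-in (real systems use an actual tokenizer), and the function names are illustrative.

```python
def estimate_tokens(text: str) -> int:
    # Crude proxy: roughly 1.3 tokens per word. Real products tokenize.
    return int(len(text.split()) * 1.3) + 1

def trim_history(messages: list[dict], budget: int) -> list[dict]:
    """Keep the system prompt plus the newest turns that fit the budget.

    Oldest non-system turns are dropped first, which is why the careful
    framing done early in a sloppy conversation eventually vanishes.
    """
    system = [m for m in messages if m["role"] == "system"]
    turns = [m for m in messages if m["role"] != "system"]
    used = sum(estimate_tokens(m["content"]) for m in system)
    kept: list[dict] = []
    for m in reversed(turns):  # walk from newest to oldest
        cost = estimate_tokens(m["content"])
        if used + cost > budget:
            break
        kept.append(m)
        used += cost
    return system + list(reversed(kept))
```

Converging on locked-down requirements before generation keeps the decisive instructions near the end of the visible window instead of in the discarded prefix.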
It's worth noting that with the author's memory settings, AI becomes a highly rational, all-knowing expert — and correspondingly, its emotional responsiveness drops significantly. The author has included the skill settings as images at the end of this article for readers to try themselves. These settings were completed in the paid version of Kimi 2.5 and can be directly copied to other AI models, including overseas ones — just see which model's output quality best fits your needs and go with that.
What if you still want AI to converse like a normal person, providing adequate emotional connection? One approach is for AI products to offer toggleable memory-preference profiles, allowing users to switch between different AI personas on the fly (unfortunately, Kimi doesn't have this feature yet). Another approach is to simply instruct AI within a single conversation to ignore all memory preferences for this session, reverting it to default mode. The author's solution is even simpler: entertainment and emotional needs go to Doubao; serious, substantive work goes to Kimi.
In short, users today have no loyalty to any particular AI model — they use whichever one works best. But skills and memory are the user's own contributions, portable at any time. One hopes AI companies will continue to improve.
A final note for readers from the humanities: you can contribute just as many important AI skills as STEM users. Every field can have its own extensive library of skills.
That's all.