The rise of "AI" (mostly Large Language Model chatbots, or LLMs, like ChatGPT, Claude, Perplexity, and so on) does enable a lot of "productivity hacks". However, Microsoft warned us back in January 2025 that the more you rely on them, the more your critical thinking skills atrophy. Most "knowledge workers" who admitted to using AI tools used their own critical thinking only to "fact-check" the LLM's output. This suggests that the "average user" may be doing even less than that.
This bodes ill for the average user, who seems to regard ChatGPT and other LLMs as some sort of generic "expert", when they are nothing of the sort. Indeed, merely by browsing /r/cybersecurity_help you can find a number of topics where the poster openly admits "I checked my logs with ChatGPT..." while lacking even the skills to fact-check the LLM they used. They suspected something, and they wanted ChatGPT to confirm their suspicions.
But that's not even the truly worrying part. Microsoft's security team is also sounding the alarm: scammers are using LLMs to craft their latest scams and enhance their social engineering, leveraging every sort of fakery possible, from fake websites to fake job postings to fake customer-service chatbots, because making them is so much easier now that LLMs consolidate all that knowledge.
Be wary out there.