Researchers say popular mental health chatbots can reinforce harmful stereotypes and respond inappropriately to users in distress.
Generative artificial intelligence tools like ChatGPT, Gemini, and Grok have exploded in popularity as AI becomes mainstream, but these tools cannot make new scientific discoveries on their own.
Happy Tuesday! Imagine trying to find an entire jury full of people without strong feelings about Elon Musk. Send news tips and excuses for getting out of jury duty to: [email protected]
A recent study by Stanford University offers a warning that therapy chatbots could pose a substantial safety risk to users suffering from mental health issues.
Therapy chatbots powered by large language models may stigmatize users with mental health conditions and otherwise respond inappropriately or even dangerously, according to researchers at Stanford University.
Chatbots may give students quick answers when they have questions, but they won’t help students form relationships that matter for college and life success.
Across the board, the AI users exhibited markedly lower neural activity in parts of the brain associated with creative functions and attention. Students who wrote with a chatbot's help also found it much harder to accurately quote from the paper they had just produced.
Built using huge amounts of computing power at a Tennessee data center, Grok is Musk's attempt to outdo rivals such as OpenAI's ChatGPT and Google's Gemini in building an AI assistant that shows its reasoning before answering a question.
People are leaning on AI tools to figure out what is real on topics such as funding cuts and misinformation about cloud seeding. At times, chatbots will give contradictory responses.
A research scientist explains how people are using AI in dating and relationships — and whether doing so is disingenuous to your partner.