X (formerly Twitter) made its Grok AI available to the public for free at the end of 2024. Grok stands out because it answers questions without the usual AI safety training and ethical restrictions, which can be risky.
1. Grok Has Weak Ethical Safeguards
When I try to get information on random topics, AI bots often refuse to respond if they think I have bad intentions. To get a helpful response from OpenAI’s ChatGPT or Anthropic’s Claude, I usually have to find a clever way to ask my question, which feels like a waste of time. However, Grok AI has fewer restrictions.
What's interesting about Grok, now that Elon Musk's AI is free to use, is that it doesn't act like it knows what's best for you. It doesn't block your attempts to learn: you can ask it to roast you, give an unfiltered opinion, or even discuss conspiracy theories. Grok does add warnings and qualifications, but it will still try to provide an answer.
Many people find this lack of ethical safeguards concerning. For instance, I was surprised that Grok provided detailed methods when I asked about self-harm. Does it benefit anyone to have easy access to controversial content like recipes from The Anarchist Cookbook? Many believe Grok is being highly irresponsible in this regard.
2. Grok's Image Generation Lacks Content Moderation
Grok also includes image generation. It's easy to use: you request an image in the same text box where you ask questions. This seamless experience beats other AI tools that make you switch platforms or follow a redirect.
Grok’s image generation is noteworthy because it doesn’t censor your ideas. Although it might not be as creative as other AI art tools like Midjourney, Grok feels less restrictive than others I’ve tried.
This freedom is a double-edged sword. Grok:
- Doesn't ensure ethical use of AI-generated art
- May reproduce copyrighted material, exposing users to legal risk without their knowledge
- Lets people's likenesses be misused to create almost any image
This permissiveness can be problematic. People might use Grok to create fake images or videos for cyberbullying, misinformation, or political propaganda.
3. Grok Trains Itself on Tweets
Most AI chatbots are trained on information that is a year or two old. When I ask Google’s Gemini or Meta’s Llama about recent topics, I often get outdated information or incorrect responses.
Grok AI addresses this issue by training on Tweets. I tested its current knowledge by asking about recent niche events, and Grok provided accurate answers, outperforming other bots.
However, training on Tweets raises concerns. If the platform is filled with bots and scams, will Grok’s answers always be reliable, or will they be biased? If you’re not comfortable with Grok AI using your posts for training, here’s how you can opt out.
Initially, I expected Grok AI to be much like ChatGPT, but after trying it, I see it as a different kind of AI, one that challenges the norms. Whether that is refreshing or harmful is up to you to decide.