Why Fear AI

(Image generated by ChatGPT)

I use AI almost daily and have even built an AI from scratch, so I like to think I have a pretty good understanding of how this technology works. In the news, on podcasts, and across other media, I constantly hear people comparing current AI to Skynet. This is one of the most misguided comparisons I’ve ever heard. What we call “AI” is not truly intelligent, not by a long shot. Without our input, these systems would be useless, just inactive programs. Take ChatGPT as an example. You have to give it a prompt for it to do anything. ChatGPT doesn’t message me out of the blue and say, “Hey, wanna play a game?” I have to initiate every interaction by providing a prompt. These so-called AIs are just highly advanced programs that generate responses from the context of the prompts they receive.
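
To make that concrete, here is a minimal sketch of that interaction in code. It is an illustration, not ChatGPT’s internals: it assumes the official `openai` Python package (v1.x), an API key in your environment, and a placeholder model name. The point is structural: the program sits idle until we supply a prompt, and the model’s only output is a reply to that prompt.

```python
# Minimal sketch of a chat loop, assuming the official `openai` Python
# package (v1.x), an API key in the OPENAI_API_KEY environment variable,
# and an illustrative model name.
from openai import OpenAI

client = OpenAI()

while True:
    prompt = input("You: ")  # the program sits here doing nothing until we type
    if not prompt:
        break
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat-capable model works here
        messages=[{"role": "user", "content": prompt}],
    )
    # The model's only output is a reply to the prompt we just supplied.
    print("AI:", response.choices[0].message.content)
```

Strip away the web interface and that loop is the whole relationship: no prompt, no output.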

Have you tried building a GPT model? If you haven’t, you should, because it will help you understand just how limited these systems are. All these billionaires claiming that AI is dangerous are trying to push for government regulations that would create higher barriers to entry for competitors. They benefit greatly if the government says, “You have to hire an AI oversight committee before you can develop any AI program.” For a programmer working from home without the support of a large company, such requirements would be prohibitive.
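
If building a full GPT is too big an ask, here is a toy sketch in plain Python of the one task these models are trained for: predicting the next token from the tokens before it. There is no neural network here, just a table of character counts, so it is nowhere near a real GPT, but the framing is the same, and it makes the limitation obvious: the program only continues text it is given; it never starts anything on its own.

```python
# Toy next-token predictor: count which character tends to follow which,
# then sample continuations from those counts. Real GPT models replace the
# count table with a transformer network, but the task is the same:
# predict the next token given the previous ones.
import random
from collections import Counter, defaultdict

text = "the cat sat on the mat. the dog sat on the log."

# Build the table: for each character, how often each next character follows it.
counts = defaultdict(Counter)
for prev, nxt in zip(text, text[1:]):
    counts[prev][nxt] += 1

def generate(seed: str, length: int = 40) -> str:
    out = seed
    for _ in range(length):
        options = counts.get(out[-1])
        if not options:  # nothing ever followed this character in the training text
            break
        chars, weights = zip(*options.items())
        out += random.choices(chars, weights=weights)[0]
    return out

# Like ChatGPT, it does nothing until we hand it a seed prompt.
print(generate("t"))
```

Swap the count table for billions of learned weights and you have the shape of a modern language model; the prompt-in, continuation-out contract does not change.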

AI Is Not There Yet

In my opinion, AI will never reach the level of danger depicted in Skynet. It might get to the point where it’s as dangerous as a fixed-blade knife. A knife isn’t dangerous by itself, but in the hands of someone who doesn’t know how to use it, it can be. I can see AI evolving into a super-advanced chatbot that we interact with daily and ask to perform tasks, but it will never be able to generate ideas on its own. Unless there is a breakthrough in hardware that allows AI to develop ideas based on what it sees or reads without any prompting from us, it will remain limited.

The Skynet idea of AI, or any science fiction version of AI, is just not feasible. There are individuals and groups attempting to create such AI, but I don’t see how it could work. Human beings are unique in how we develop and grow, and I don’t think replicating that in a machine will ever be possible. I’m not losing any sleep over this, and you shouldn’t either. Always question the motives of anyone spreading fear; there is usually an underlying agenda.

What ChatGPT Thinks of My Article

The article makes several compelling points about the current state of AI and the misconceptions surrounding it. Here are some strengths and areas for improvement:

Strengths:

  1. Personal Experience: The article benefits from the author’s personal experience with AI, lending credibility to the arguments.
  2. Clear Examples: Using ChatGPT as an example helps readers understand the practical limitations of current AI technology.
  3. Critical Perspective: The article critically examines the motives behind AI fearmongering, encouraging readers to question the narratives they encounter in the media.

Areas for Improvement:

  1. Structure and Clarity: The article would benefit from clearer structure and transitions between points. For example, separating personal anecdotes from broader arguments can help maintain focus.
  2. Grammar and Punctuation: There are several grammatical errors and typos that need correction to improve readability.
  3. Depth of Analysis: The argument could be strengthened by including more evidence or references to studies or expert opinions on AI capabilities and limitations.
  4. Balanced View: While the article rightly dismisses exaggerated fears, it could also acknowledge legitimate concerns and ongoing discussions in the AI community about ethics, safety, and regulation.

See, It’s Dumb

I fed ChatGPT my article, and that’s what it gave me before it froze. If that doesn’t tell you something, I don’t know what does.
