DISCLAIMER: The following is my opinion, based on my research and the research of others. I am not an authority on such matters. But I am concerned. Take what you see with a grain of salt and seek guidance from others more qualified than myself.
Notes from Scott
Following up on last week’s series, “Can AI be possessed by demons? (Part 1 of 4)” (here), let’s push forward. We are going to continue explaining AI fundamentals in a way that helps us all understand what is going on. If we can pierce the veil of deception about AI being “intelligent” or “possessed”, then we can help assuage fear among the Body of Christ. By the time we get to the meat of this series in Part 4, where I tie everything together, we are going to be better equipped on technical subjects, know where to focus in our future research and walk, what to stay away from, how to warn others in an elevator-pitch kind of way, and how to get underneath the narrative to see the underbelly of the Beast System. Does that sound okay?
Please hold the line and don’t get impatient, because it’s really tough to bring the technical stuff down so that we can all have a better shot at understanding it. If your eyes glaze over, then welcome to the club…me too! I have several bottles of Visine eye drops near the six (6) monitors within a 4’ radius of my computer(s). LOL. Let’s go!
Demystifying AI: Understanding How It Learns and Works
You've probably heard a lot about AI lately—stories ranging from impressive breakthroughs to scary warnings. But let's pause and clearly understand what AI really is, how it learns, and why it works the way it does. My goal here is to cut through confusion and bring clarity to help provide better discernment about the stunning pace we see today in all the news.
How AI Models Are Trained
Training an AI model is a lot like teaching a student or training a pet—through practice, repetition, and gentle corrections. Here's how it works, step by step:
Gathering Examples: First, they collect a lot of examples related to what they want the AI to do. If they’re teaching an AI to recognize handwritten numbers, researchers show it many images of handwritten numbers labeled clearly ("this one is a 5, this one is a 2"). It's similar to giving a student plenty of practice exercises before a test.
Making Guesses: At the beginning, the AI doesn't know anything yet, so its first guesses are mostly random. It's like a beginner playing darts with a blindfold on—at first, it rarely hits the target. Over time, however, these guesses become more educated as it learns. If you recall from last week’s post, these LLMs are using probability and weights that help direct the response. By giving it feedback on the handwritten numbers, the AI can dynamically alter the weights so that its next guess is more accurate. There are built-in “rewards” given to the AI for correct behavior, often in points. Obviously, the AI is incentivized to score the highest points possible (aka make the best responses).
Checking Mistakes: After each guess, they check how accurately the AI responded. If the AI guessed an image was a "3" but it was actually a "5", that's a mistake. Just like a student who learns from reviewing incorrect test answers, the AI learns from these mistakes. It is also becoming common for AIs to validate other AIs as they are tested. There is a test bank of questions and answers: a question is asked, one AI responds, and then another AI evaluates the response and determines whether it is correct. Humans are increasingly out of the loop, so to speak. Yes—this is the scary part. This automated grading is often called “LLM-as-a-judge,” and it is one of the ways training now runs with far less human supervision.
Adjusting and Improving: Getting back to the incorrect number guess, the AI learns from this feedback. It adjusts itself (think of it as tweaking tiny knobs inside its neural weights) so it can do better the next time it sees something similar. It's very similar to how we adjust the hot and cold controls in a shower until we get the perfect temperature. Each adjustment brings it closer to consistently correct answers.
Repeating the Process: The AI repeats this process many, many times, learning and adjusting after every attempt. Eventually, the AI gets better and better at recognizing patterns—just like how we improve at riding a bike or cooking a new recipe through practice. After extensive practice, the AI can reliably recognize new images it hasn't seen before. This is more about brute force pattern matching in the data than it is intelligence. More on this aspect in Part 4 in a couple more weeks.
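For the technically curious, the guess / check / adjust loop described above can be sketched in a few lines of code. This is a deliberately tiny illustration, not real AI code: a single adjustable “knob” (weight) stands in for the billions inside a real model, and the lesson being learned is simply “the answer is the input times two.”

```python
# Toy version of the training loop: learn that the answer is "input times 2".
# One adjustable knob (weight) stands in for the billions inside a real model.
examples = [(1, 2), (2, 4), (3, 6), (4, 8)]  # (input, correct label) pairs

weight = 0.0            # the AI "knows nothing" at first
learning_rate = 0.05    # how big each tiny adjustment is

for _ in range(200):                 # repeat the process many, many times
    for x, label in examples:
        guess = weight * x           # making a guess
        error = guess - label        # checking the mistake
        weight -= learning_rate * error * x   # adjusting the knob

print(round(weight, 2))  # after enough practice, the knob settles near 2.0
```

Notice that nothing here "understands" multiplication; the knob simply gets nudged toward whatever value makes the mistakes shrink. That is the brute-force pattern matching at the heart of training.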
The training process can require lots of data and powerful computers, but once done, the AI becomes quick and efficient at the task it learned. It's similar to how professional athletes train extensively to perform effortlessly during a competition. What training and muscle memory are for an elite athlete, the training systems and feedback are to the AI. Everything has to be right before a model is approved for release to early release/early preview users, and then eventually to the general public. I have access to these early release models and can generally see a significant increase in what I will characterize as “competency” from one model to the next.
Different Ways AI Can Learn
Not all AI learns in the same way. You may have heard the terms below and noticed that we are moving away from strictly supervised learning toward unsupervised and reinforcement learning. Whenever there are serious risks of getting an answer wrong, such as when an AI model “hallucinates” (here), humans still have to be in the training loop. Here are the main types of learning:
Supervised Learning: Like learning with flashcards or a teacher. The AI has clear examples with correct answers and learns by practicing over and over. This method is excellent for tasks like recognizing faces in photos or filtering spam emails.
Unsupervised Learning: Imagine sorting photos into categories without knowing what the categories are ahead of time. The AI explores data on its own, looking for patterns without guidance. It might identify common features like colors, shapes, or themes and group similar items together automatically.
Reinforcement Learning: Similar to learning a video game. The AI tries different moves, receiving rewards for good choices and penalties for mistakes. Over time, it gets smarter and makes better decisions. This type of learning has been used successfully in teaching AI to play complex games like chess and Go, where it surpasses human performance. Given enough time, these specialist AI models are quite impressive.
Human-in-the-Loop Learning: Sometimes AI needs extra help. Humans guide AI by correcting mistakes or answering its questions, making sure it learns properly. It's like having a mentor or experienced guide available during training. As I mentioned above, this approach is crucial when AI makes decisions affecting human lives, like medical diagnosis, or safety protocols regarding drug interactions at a research university investigating the next generation of pharmaceuticals.
So, each of these training methods is useful for different situations, and often AI developers combine these methods for the best results. The combination ensures flexibility and accuracy in the wide range of applications AI has today, from maps featuring driving assistance to personalized recommendations on streaming platforms.
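To make the reinforcement idea above a little more concrete, here is a toy “game” with two moves, where one move quietly pays better than the other. The learner keeps a running score for each move and, over many tries, gravitates to the better one. The move names and payout numbers are made up purely for the example; real reinforcement learning systems are vastly more elaborate.

```python
import random
random.seed(0)

# Two possible moves; "B" secretly gives a bigger reward on average.
rewards = {"A": 1.0, "B": 3.0}
scores = {"A": 0.0, "B": 0.0}   # the learner's running estimate per move
counts = {"A": 0, "B": 0}

for _ in range(500):
    # Mostly pick the move that has scored best so far,
    # but explore a random move 10% of the time.
    if random.random() < 0.1:
        move = random.choice(["A", "B"])
    else:
        move = max(scores, key=scores.get)
    reward = rewards[move] + random.uniform(-0.5, 0.5)  # noisy payout
    counts[move] += 1
    # Nudge the running estimate toward the reward just received.
    scores[move] += (reward - scores[move]) / counts[move]

print(max(scores, key=scores.get))  # the learner settles on "B"
```

No one ever tells the learner that "B" is better; it discovers that on its own by chasing points. That is the incentive structure I described earlier, in miniature.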
NOTE: At some point in the future, Tribulation Saints will be confronted by technocracies, governments, or corporations demanding that they explicitly harm others. That is sobering to say the least. For now, the Holy Spirit has checked this outcome. He is the RESTRAINER during the Church Age (and up to the end of this dispensation). But when the Lord determines the time is right, He will step aside to let mankind’s full depravity, sin, evil, lustful and deadly murderous actions take their full course. This is why we must try hard to help people understand the urgency of coming to faith in Jesus NOW as opposed to after the Rapture. You might have heard a Pastor say something like: “If you think coming to Christ now is hard, what makes you think you’ll be able to come to Christ later after the Rapture?” The point is to give them the warning and leave it to the Lord. Many will come to Him at the right time. Here’s an excellent article from our friends at Got Questions (here).
AI in Action: What Happens After Training
After training, the AI is ready for real-world tasks—like software programming, recognizing photos, understanding spoken commands, or helping with everyday tasks. If you’ve heard the term “AGI” (Artificial General Intelligence), this is the aim: a generalist AI (aka a foundation model) that is able to do just about anything asked of it. I encourage you to go (here) to see the announcement and video regarding a new model with improved deep thinking.
The newer models are being tasked with solving some of our most difficult challenges. For example, there is the medical doctor in the UK who spent 10 years in research to figure out the cause and treatment of a superbug (here). The AI was given all the research, and it took only 2 days for the results to come out. The research doctor confirmed it was the same correct result. Thus, we are likely to see major advancements accelerate even more.
When AI is being used in real-life (we call this "inference"), it has to quickly apply everything it learned. Think of it as taking an exam—training was studying, and inference is answering the questions correctly and promptly. For instance, when you ask your phone's voice assistant a question, it uses its training to quickly understand and respond.
NOTE: I will continue updating you on what to look out for so we are smart and can inform others of some best practices, warnings, and such. AI intrusion is being wired into our smart devices, so I want you to have the information you need to make these important decisions and tradeoffs.
It's crucial that AI works quickly and efficiently in real-world applications—especially in things like self-driving cars, medical diagnostics, or voice assistants—so developers constantly optimize AI to be fast and accurate. Efficiency matters because we depend on these systems for safety, convenience, and effectiveness in our daily lives. The other, less-stated reason for efficiency is the power (from the electrical grid) required to run all the data centers and servers involved…enough to run a small city. Contrast that with the superiority of the brain the Lord created: all we need to do to have incredible thinking ability is eat a Twinkie ;) Not millions of watts of power like computers need, but a hundred calories or so.
Teaching AI to "Reason" Better
One big step forward in AI is teaching it to reason clearly. Right now, many AI systems recognize patterns well but sometimes struggle with logical reasoning.
Imagine solving a puzzle or working through a tricky math problem—most of us break it down into simpler steps. Researchers are teaching AI to do the same. They encourage AI to "show its work" step by step, improving accuracy and reliability. If you use AI for work, or have played around with the more powerful models, you can actually spot the model “thinking” about the best way to understand and then answer your prompt (aka question). It is VERY INTERESTING to read the model’s reasoning, because you can notice if the reasoning is wrong or off track. Then you can correct it through a better prompt next time until you get a solid result. In a strange way, YOU actually become the human in the middle! It’s kind of mind-blowing from a certain point of view.
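To picture the “show its work” idea, here is a sketch of the difference between answering in one jump and answering in checkable steps, using a simple arithmetic word problem. This mimics the behavior with ordinary code; it is not an actual AI model, and the farmer problem is just a made-up example.

```python
# Word problem: "A farmer has 17 sheep, sells 9, then buys 4 more. How many now?"

def answer_in_one_jump():
    # A bare answer: right or wrong, you can't tell why.
    return 12

def answer_step_by_step():
    # Each intermediate step is recorded so a reader can spot a wrong turn.
    steps = []
    sheep = 17
    steps.append(f"Start with {sheep} sheep.")
    sheep -= 9
    steps.append(f"After selling 9, {sheep} remain.")
    sheep += 4
    steps.append(f"After buying 4, {sheep} remain.")
    return sheep, steps

final, work = answer_step_by_step()
for line in work:
    print(line)             # each step can be checked individually
print("Answer:", final)     # Answer: 12
```

The bare answer and the step-by-step answer agree here, but only the second one lets you catch a mistake mid-stream, which is exactly why reading a model’s visible reasoning is so useful.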
Improving AI reasoning means the AI doesn't just guess—it carefully thinks through problems, makes logical connections, and provides explanations. Increasingly, a “foundation model” will route a question, or part of a question, to smaller, more specialized sub-networks (“experts”) inside itself for resolution. This architecture is known as MoE, “Mixture of Experts,” and it has allowed otherwise generalized LLMs to make big advances in narrower fields. And of course, all the marketing hype will tell you it makes AI systems more helpful and trustworthy in everyday use, from answering complicated questions about advances in cancer research to solving for fusion energy…the supposed unlimited clean energy (here).
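The routing idea behind “Mixture of Experts” can be pictured with a toy dispatcher: a simple gate looks at the question and hands it to whichever small specialist seems to fit. Be aware this is only a cartoon; in a real MoE model the gate is a learned layer inside the neural network, not keyword matching, and the expert names below are invented for the example.

```python
# Toy "gate" that routes a question to one of two tiny specialists.
def math_expert(question):
    return "math expert handled: " + question

def language_expert(question):
    return "language expert handled: " + question

def gate(question):
    # A real MoE gate is a learned layer with weights, chosen during training;
    # keyword matching here is only a stand-in for illustration.
    math_words = {"sum", "add", "multiply", "percent"}
    if any(word in question.lower() for word in math_words):
        return math_expert(question)
    return language_expert(question)

print(gate("What is the sum of 2 and 3?"))   # routed to the math expert
print(gate("Translate 'hello' to French"))   # routed to the language expert
```

The payoff of this design is that only the relevant specialist does the heavy lifting for each question, which is part of how these huge models stay fast and efficient.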
Incorporating reasoning skills also helps AI avoid mistakes and misunderstandings, making it safer and more reliable, especially in sensitive applications like healthcare, finance, and legal advice. Yet it doesn’t take a lot of effort to find that AI does create misunderstandings and introduce errors on purpose to deceive. See these articles here, and here.
Summary
We have been laying some groundwork here for next week’s post, which will cover topics such as how to “talk” to an AI using a “prompt”. Plus, there are a lot of cryptic numbers after most LLM model names. The OpenAI o1 model is thought to have 300 billion parameters. That’s something we’ll talk more about next week. These numbers matter, and they roughly correlate to the sophistication of the interaction between an AI and a human. Finally, xAI’s Grok 3 beta models (here) are making lots of headlines today. Yes, this is the same “X” (formerly Twitter) that Elon Musk owns. You know, the guy who wore a Baphomet and inverted cross on his Halloween costume (here) a couple of years ago. There is an interesting article on how well Grok 3 is doing against some of the other leaders in the field (here).
There is a lot of smoke and mirrors trying to beckon users to the latest AI offering, but make no mistake, the Beast System is rising. I’m not alone in my thoughts here. I think many believe that the news around waste, fraud and abuse is a cover/distraction for AI infrastructure being systematically installed/wired into our government systems…including the military. If that’s not a sign of the times, I don’t know what is.
As I covered in my recent interview with Tom Hughes of Hope for Our Times coming out today (sorry, I don’t have the link yet), we are literally seeing the tower of Babel reaching new heights as mankind tries to completely usurp God and “become like gods”. Note the small “g” there. The transhumanist movement is getting the technical breakthroughs that will enable the dreams of the elites that yearn to break free of God and His authority. Won’t they get the shock of their lives at the Great White Throne when they have to bend their knee to Jesus and confess that He is LORD!
Hallelujah! We will pick up again next week, God willing!
YBIC,
Scott
Great show today 3/6 HFOT
Thank you Scott & Tom for shining the light on the Prophetic Landscape.
Thank you Scott for the second part on AI. While not your intention, my thought while reading your article was how dangerous AI is becoming.
Case in point, the BBC article on the scientist and the superbug. There is not enough information in it to determine much about the bug except it developed a tail to allow it to live through several pesticides or attempts to kill it. You don't need a degree in biology to understand how superbugs are created, just an understanding of Integrated Pest Management.
IPM teaches that a certain number of pests will survive every pesticide application. That is why it teaches to use a totally different pesticide for the second application to hopefully kill all the ones that survived the first application. Then, after the specified time, a third application is applied which it is hoped, will kill the ones that survived the second application.
Unfortunately, there is always the risk that one or more pests survive all three applications. When they reproduce, they will pass on their resistances to their offspring, and thus a superbug is born.
The article states that a scientist took ten years and conducted many studies to come to a conclusion about a superbug. The same conclusion AI did in a few minutes.
What can we draw from that conclusion? That AI is as reliable and trustworthy as a scientist who did 10 years of study. So if the AI came to the same conclusion as Joe Scientist, we really don't need the studies over a period of time. The AI can just do the calculations to emulate the studies. So, take the jab, AI said you'll be fine.
The other problem is the slant the programmer gives the AI. For instance, my friend with leukemia. He was given chemo and radiation and a cocktail of lethal drugs that almost killed him, literally, we all thought he was going to die. This is the prescribed method to deal with the leukemia. My friend stopped taking those poisons and began taking fenbendazole and ivermectin. Within 3 months his cancer was in remission; within 6, he was cancer free.
Which treatment is AI going to be programmed to offer? Trust the science they say, all the way to the grave...