Featured

Paradigms #1:

The Digital Age: Shift

At its current rate of functioning, I do not think the internet will last in this capacity. What will transpire is a transformation of the internet into an almost unrecognizable state. I am not brazen enough to believe that the entire internet will simply phase out over time; on the contrary, I think it will grow in both size and usage. So what does that seemingly perplexing statement really mean?

The great victory of the internet is the fact that it is free: free to use, and freedom of use. There are no restrictions on the internet. It is a haven of free speech, a true modern allegory for a utopian society: people free to do anything, say anything, see anything. Anyone old enough to remember the earlier stages of the internet would tell you that there were many rabbit holes to be explored. The number of known websites was an ever-growing variable. Most users had not yet migrated to a mainstay site, and were spread more widely throughout the virtual world. The internet was a brave new world, lush with discovery and unknowns. In such a sporadic landscape, moderation of websites was at its lowest. Many people were just there to take in the sights, at a childlike state and pace. Children never think of the future, which made the internet almost simple: a state of now that parallels the optimism of the Magellan era. And in a similar sense, all things must come to pass, as did the arrow through the hopes of the discoverer.

Akin to the seasons, a harsh winter breeze serves as a reminder of the bounties of springtime. The waning moon is a stark reminder of the imminent arrival of the bountiful sun that we all know will shine above us, as the seemingly endless night draws to an end. Yet this cycle will repeat, a truth known to any traveled man. The internet is no different: as technology and society advance, the cycle will no doubt reach a point of bountiful usage and learning. But we are reaching a critical winter in the online world. As more and more people join in using the internet, so grows the number of websites that are created. Although that appears to be the case, it would be more suitable to say that the internet is actually getting smaller. More and more people use the internet every day, yet they are not creating more websites. The great migration of users is not to the new websites created daily; rather, they migrate to the websites already in heavy use. This presents problems that cannot be ignored: privacy and security are properties that many people believe they have, but in reality, the internet is no such haven. A looming shadow will dare show its face to us, and the time of bounty will quickly become an era of subtle evil.

I am sure that many people have heard or read of the Cambridge Analytica data leaks. Mass collection of user data conjures images of a dystopian world, but many people have reluctantly accepted this as the nature of the beast. It is never so easy. The mass collectors will continue to do their job, and this is only the beginning. The possibilities of these intrusions go much further; mass surveillance, for a start, seems like the logical next step. One can argue that data collection is already the same thing. However, I am talking about surveillance on a scale that is not in the same realm of understanding as data collection. Data collection is merely treading the waters before the oncoming flood. As consumerism reaches greater levels, so will the demand. It is logical to see that greater demand requires greater information, specifically the logistics and the essence of the demand. “Exactly why does product m fare better than product n in certain markets? Why does country A prefer this lifestyle over country B?” All of this is for one goal: simplicity. Simplicity of life sells well because the thought of peace of mind is highly marketable. Elevating lifestyle simplicity and tailoring products to the public will no doubt make life easier. Barriers that many people once faced every day were eliminated by similar means, like the cellular phone, the car, or even the translation of books, and now this facet will be taken even further. These statements all point toward an idealized utopia: a carefree life where everybody everywhere gets along. Now, that does sound like something mentioned before.

Attempting to live a life purely on the internet seems extraordinary and outlandish. Yet it is in the realm of possibility. In fact, it may even happen within the lifetimes of anyone reading this. The reason I say this is the culmination of my statements above. The internet is increasing in users, but the size of the virtual world is diminishing. Because of this, the world is growing more and more simple, as the things we consume serve to simplify our lives through technology. As technology gets better, the need for physical stores will disappear, and preferred areas of living will be dictated by delivery times. As barriers in our world fall to these simplifications, the sparsity of human settlements will also decrease. Much as the internet’s user bases are doing, many people will begin to centralize around large cities, since technology has simplified the needs of many humans. There is very little reason for humans to stay in the outskirts or rural areas of the world. One may argue price is a factor, much like in any large city in the world. Eventually, they will be able to relocate, as the demand for living space within the large cities will skyrocket. It would be foolish to assume that this demand will not warrant expansions not only of the cities’ core infrastructure, but of the living areas around them. So, now that everyone has settled in a large techno-city, what is next?

“1984” by George Orwell is a book that I hate. The writing is clear and horrifying, each page serving as another terrifying revelation after the last. It is, admittedly, an excellent book, which is exactly why I despise it so. For all that praise, the book does miss a key factor that Orwell could not possibly have foreseen, else it would mirror the modern world perfectly: the internet. The power of the internet would have shortened the book tenfold, as the outcome would have been obvious from Orwell’s perspective. Once the migration of humans, both on the internet and in reality, has centralized, the entire world can be rewritten. History is rewritten even today, and in a society of pure technocracy, it would be child’s play. Consumer products perfectly tailored to your every need through data collection will give birth to true mega-corporations that know exactly what people want. The media and news outlets will stomp the truth out of every article, shifting the narrative to fit whatever the corporations want. A true mass indoctrination of the world will occur, as votes are completely swayed by the portrayal of any opposition as overbearingly evil. The worst part is that nobody will be able to do anything, and nobody will want to do anything. The internet will have simplified even the complexities of the human being and, in doing so, will restrict our way of life to that of a caged animal.

The purpose of this post is not to fearmonger. I still believe in the human spirit to oppose simplicity and embrace chaos. Unfortunately, looking at this from a human’s perspective, it seems like a plausible outcome. Although it is unlikely, I think people should spend the time to look for alternatives to their ways of life, because being stoic is a means to a happier life. As the saying goes, if something seems too good to be true, it probably is.


First Impressions

Be yourself; everyone else is already taken.

— Oscar Wilde.

I find myself, more frequently as of late, in a state of constant turmoil. Sometimes my emotions simply jump around, almost sporadically. There are times when I have an irresistible urge to speak, and times when I simply have nothing to say. As I toil away with my own thoughts, I ask myself: why am I me? Was I born to speak, or born to listen? Sometimes, I feel at peace being either, neither, or both.

This blog will serve as an archive of things that I personally believe(d). I welcome any and all comments.

This is the first post on my blog. I’m just getting this new blog going, so stay tuned for more. Subscribe to get notified when I post new updates.

The “Big AI” Era is Over.

Here is What the Near Future Holds

For the last five years, the AI industry has been obsessed with a single, rather brute-force strategy: scaling. Basically, the bigger the better. We were sold a narrative that if we just fed more data and more chips into the “Transformer” architecture (the engine behind ChatGPT), it would eventually earn the coveted label of “AGI”.

But in 2026, that strategy has started to lose steam. The giant models are too expensive to run, they still “hallucinate,” and they are too massive to leave the cloud. If you are a business leader or a regular user waiting for a “bigger” frontier model to help you solve your problems, you are looking in the wrong direction. The “Godfather of Scaling” himself has told us to stop.

The Verdict: Why the “Age of Scaling” is Dead

In late 2025, the industry witnessed a massive pivot from Ilya Sutskever (co-founder of OpenAI and the primary architect of the “Scaling Hypothesis”). After spending a decade proving that “bigger is better,” he reversed course.

In a landmark interview, Sutskever famously declared: “The 2010s were the age of scaling; now we’re back in the age of wonder and discovery once again.”

His reasoning is the business case against the old models:

  • The “10,000-Hour” Problem: current consumer frontier models learn rather inefficiently. Think of the student who memorizes every answer key for 10,000 hours but still doesn’t understand the principles of the subject.
  • The “Data Wall”: we have already fed the models the entire internet, and data is the blood that fuels the engine of AI. If the entire internet has been fed into modern AI, where will data come from next? To get smarter using the old method, we would need something like 100,000x more data, which simply doesn’t exist.
  • The efficiency gate: running these models at scale to perform inference is already incredibly expensive, and it will only get more expensive as scale ramps up.

His new venture, Safe Superintelligence (SSI), is betting that the future isn’t about building a bigger cluster but finding a “new physics” of learning that is efficient and insightful.

The Hardware Reality: Why “Big” Can’t, and Shouldn’t, Fit in Your Pocket

Even if we could make the models bigger, we have nowhere to put them. To understand why the future is “Small and Specialized,” you must look at the physical limitations of the device in your pocket.

Think of an AI model as furniture, and your phone’s memory (RAM) as a small apartment, with limited room for said furniture.

  • The “Frontier” Model (e.g., pre-trained GPT-5, Gemini 3): The soon-to-be classic AI. This is a massive, wall-to-wall sectional sofa. It requires 250GB+ of RAM just to turn on. It will never fit in your phone (which has ~12GB of RAM). It must live in a warehouse (a data center), and you must rent access to it; every single use is a trip to the warehouse. The benefit of such size is that it is generalizable. However, aside from research purposes, ask yourself: is this necessary?
  • The “Bleeding Edge” Model (e.g., Mamba – 3B): This is a folding chair. It runs comfortably on just 2GB to 4GB of RAM. You can fit it in your apartment easily. In fact, you can own ten of them, maybe of different styles and colors. To redecorate your room, you can fold them up and swap them out as needed.
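The apartment analogy above can be put into rough numbers. The sketch below is a back-of-the-envelope estimator, assuming weights stored as 16-bit floats (2 bytes per parameter); real deployments vary with quantization and runtime overhead, and the model sizes are illustrative.

```python
# Back-of-the-envelope RAM estimate for the "furniture" analogy.
# Assumption: 16-bit weights (2 bytes per parameter); quantized deployments
# (8-bit, 4-bit) shrink this further, and runtime overhead grows it.

def model_ram_gb(num_params: float, bytes_per_param: float = 2) -> float:
    """Approximate RAM (GB) needed just to hold the model weights."""
    return num_params * bytes_per_param / 1e9

small = model_ram_gb(3e9)        # ~6 GB for a 3B "folding chair" at fp16
small_q = model_ram_gb(3e9, 1)   # ~3 GB at 8-bit -- phone territory
large = model_ram_gb(405e9)      # ~810 GB for a 405B "sectional sofa"
```

At 4-bit quantization the same 3B model drops to roughly 1.5 GB, which is how a “folding chair” fits alongside all your apps inside ~12GB of phone RAM while the “sofa” stays in the warehouse.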

So what? This physical constraint is the single most important factor for the future of enterprise software and consumer usage. It means that privacy, autonomy, and cost are physics problems. As long as you rely on massive 405B-parameter models, sensitive data must leave the building to be processed in a cloud warehouse you don’t control. Efficient models (and efficiency translates into even broader application and usage) can perform much more complex tasks at a fraction of the cost, with more control and less waste. The kicker? The performance will actually be better than the supermassive models of today.

The New Intelligence: Three Models Slated to Replace the Giants

Guided by Sutskever’s “Age of Discovery,” let’s discuss some architectures that act like various furniture types that are all small, efficient, and purpose-built. Remember, this is only the beginning, with much more to come with their own benefits and drawbacks.

Mamba & SSMs: The “Privacy” Engine

  • The Old: Modern massive models get slower and more expensive the more they read and memorize; the cost of attention grows faster than linearly with context, so going from 0 to 100k tokens of context is far easier and faster than going from 100k to 200k.
  • The New: Researchers at Carnegie Mellon and Princeton developed “Mamba” to process data linearly. It digests data as fast as you can feed it.
  • The Difference: A tiny Mamba-3B model can often match the performance of older models more than twice its size. This allows you to, for example, process thousands of pages of private legal or financial documents locally and more efficiently on a laptop, without that data ever leaving your building.
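To make the linear-versus-quadratic point concrete, here is a toy cost model. The unit costs are illustrative, not benchmarks of any real implementation: self-attention compares every token with every other token, while an SSM-style scan touches each token once.

```python
# Toy cost model: quadratic attention vs. linear state-space scan.
# Unit costs are illustrative, not measurements of any real system.

def attention_cost(n_tokens: int) -> int:
    # Every token attends to every other token: O(n^2) comparisons.
    return n_tokens * n_tokens

def ssm_cost(n_tokens: int) -> int:
    # A selective scan (Mamba-style) touches each token once: O(n).
    return n_tokens

for n in (1_000, 10_000, 100_000):
    ratio = attention_cost(n) // ssm_cost(n)
    print(f"{n:>7} tokens -> attention costs {ratio:,}x a linear scan")
```

The gap widens linearly with context length, which is exactly why thousands of pages of private documents favor the linear architecture on local hardware.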

JEPA: The “Common Sense” Engine

  • The Old: Generative models try to predict every pixel or word, which leads to “hallucinations” (lying) when they get confused.
  • The New: Championed by Meta’s ex-chief AI scientist Yann LeCun, JEPA (Joint-Embedding Predictive Architecture) stops trying to predict details and starts predicting concepts. JEPA is one application within the family of World Models.
  • The Difference: Reliability. A VL-JEPA model can understand a video feed using a fraction of the computing power of standard models, while being far less likely to invent objects that aren’t there.

And more broadly, it is important to understand that JEPA belongs to a class of new model paradigms called World Models. Think of a World Model as the goal: an AI that can simulate the future to plan complex actions. Think of JEPA as the blueprint for building it. Because predicting every single pixel of the future is computationally impossible (and unnecessary), JEPA teaches the World Model to ignore the visual noise (pixels) and predict the underlying reality (concepts). Let’s discuss World Models below.
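As a concrete (and heavily simplified) illustration of “predict concepts, not pixels”: the sketch below uses stand-in random projections as encoders, not the real I-JEPA networks, and measures prediction error in embedding space instead of reconstructing the raw input.

```python
# Toy JEPA-style objective: predict the *embedding* of the missing part,
# and score error in that abstract space rather than over raw pixels.
# The encoders here are hypothetical random projections, for shape only.
import numpy as np

rng = np.random.default_rng(0)
D_IN, D_EMB = 64, 16

encoder = rng.standard_normal((D_IN, D_EMB)) * 0.1    # shared "concept" encoder
predictor = rng.standard_normal((D_EMB, D_EMB)) * 0.1 # maps context -> target concept

def embed(x: np.ndarray) -> np.ndarray:
    return np.tanh(x @ encoder)

context = rng.standard_normal(D_IN)  # the visible part of the input
target = rng.standard_normal(D_IN)   # the masked part to anticipate

pred = embed(context) @ predictor           # prediction lives in concept space
loss = np.mean((pred - embed(target))**2)   # error over concepts, not pixels
```

Compare this with a generative loss of the form `mean((decode(pred) - target)**2)`, which forces the model to account for every noisy detail of the input; scoring in embedding space is what lets the model stay small and avoid inventing details.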

World Models: The “Agent” Engine

  • The Old: Chatbots can talk, but they can’t do. They don’t understand cause-and-effect. LLM wrappers might try, but they are guessing at best with no guarantee of certainty. This is why modern agents have such high failure rates, almost all caused by hallucinations; no amount of prompt engineering can fix an inherent problem, nor is it worth the time and constant monitoring. At best, modern “agents” are for workflow automation, not for reasoning.
  • The New: World Models simulate futures. They run an internal physics engine to ask, “If I click this button, what happens next?”, allowing the AI to truly reason rather than blindly guess.
  • The Difference: This turns AI from a chatbot into a true form of Agent. It allows a small model to understand and navigate a website and book a flight, rather than just writing a blanket itinerary or constantly feeding (expensive) context.
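The “simulate futures” loop can be sketched in a few lines. This is a minimal, hypothetical illustration: a hand-written one-dimensional “world” stands in for a learned world model, and the agent picks the action whose imagined rollout ends best.

```python
# Minimal model-based planning sketch: imagine each action inside a tiny
# hand-written world model, then act on the best imagined outcome.
# The 1-D "world" and GOAL constant are illustrative stand-ins.

GOAL = 5

def simulate(state: int, action: int, steps: int = 3) -> int:
    """World model: roll the action forward, return predicted distance to goal."""
    for _ in range(steps):
        state += action
    return abs(GOAL - state)

def plan(state: int, actions=(-1, 0, 1)) -> int:
    # "If I take this action, what happens next?" -- pick the best future.
    return min(actions, key=lambda a: simulate(state, a))

best_action = plan(0)  # from state 0, stepping +1 ends closest to GOAL
```

A chatbot answers the question; a planner like this (with a learned model in place of `simulate`) chooses among imagined outcomes, which is the difference between writing an itinerary and actually booking the flight.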

It is crucial to remember that Mamba, JEPA, and World Models are simply the current answers to the efficiency problem, not the final ones. We are in the early days of the ‘Age of Discovery,’ and the landscape is shifting under our feet. What looks like a breakthrough today, such as a 3GB model running on a phone, may be considered bloated in two years. The transition away from the ‘Giant Transformer’ isn’t just a one-time upgrade; it is the beginning of a new evolutionary branch where models will continue to get smaller, faster, and more alien to us.

Who cares?

So, this may sound highly technical and daunting. That’s because it is. However, this is no different from when the first iPhone came out; we all had to become tech- and app-literate to keep up with the changing mobile environment. We all had to learn what “gestures” were, what downloading and installing applications meant, and how to type on a touchscreen. What the iPhone did well was make frontier technology the simplest it has ever been to use. These new models are merely an extension of that: once they are further commercialized, understanding the technical details will not be required, the same way we don’t need to understand the backend code of the applications on our phones. We merely need to know what each one does and how to apply it in the most optimal way for our businesses and personal uses in the future.

References

On The Pivot from Scaling:

  • Sutskever, I. (2025). “The Age of Scaling is Over.” Interview with Dwarkesh Patel / Reuters. YouTube

On Efficient Sequence Modeling (Mamba):

  • Gu, A., & Dao, T. (2023). “Mamba: Linear-Time Sequence Modeling with Selective State Spaces.” arXiv:2312.00752

On Grounded “Concept” Prediction:

  • Assran, M., et al. (2023). “Self-Supervised Learning from Images with a Joint-Embedding Predictive Architecture (I-JEPA).” arXiv:2301.08243

Opinions #1:

The Commonality of Man #1: Social Standards

2019/12/15

I, as a human being, have an acute social nature. It feels natural to be with my fellow man. Having time to myself is also nice; however, it is not a state I would enjoy having perpetuated. I firmly believe that many, if not all, humans throughout history and through all of the unpredictable future seek companionship. I firmly believe that mutual trust between humans is an almost tangible asset, one that many people in the world lack. That is where the point of this opinion piece comes in:

The human known as “me” is one of billions, yet we all think that “I” am unique. That “I” am the one that stands out. Every single day, whether consciously or subconsciously, we seek affirmations of our uniqueness, like a faint gasp in a sea of ubiquitous pleas. It’s only natural: I find that I cannot morally condemn those who do more “gasping” than others, unless I wish to be seen as a hypocrite. I do have fears and prejudices about this growing population of attention-seekers, although I cannot put my finger on the exact reasoning they possess to take such action. As such, I cannot simply point left or right. It is a subject worth talking about, however, as anything in excess is morally bankrupt in its own right.

Seeking validation comes in many forms. Social media has lifted this want to new heights, allowing many to pursue a lifestyle built around popularity. Some may say that this is a bad thing, that excessive publication of one’s personal life is immoral or virtuously brain-dead. However, this post will not be agreeing with those vocal netizens. In fact, I would argue that it is a good thing; I believe that having a voice in trying times is invaluable. Imagine a butterfly without wings, or a fish without fins. A human without social commentary directed at them specifically would be akin to a moral noodle. I feel that receiving criticism from all perspectives is just as important as learning from all perspectives, because a human without cognitive judgement is simply a bag of flesh.

There are negatives to social media, however, and they run in the complete opposite direction of my previous points. Being popular online means presenting your persona to the largest audience; hence the definition of popular. While right or wrong does not play a role here, it is easy to see that playing to the basics of human cognition is the simplest method of building a following. An example is the term foodie: used as a personable tag, foodie seems very lackluster in terms of depth. Humans must strive for the happiness and well-being of themselves and their circle, while food is a literal foundation of being alive. The psyche being the mixing cauldron that it is, it is no wonder that the two form a connection in the precinct of the mind. Therefore, tagging oneself as such seems redundant, and cannot be seen as anything more than a ploy to grab attention. There are many such examples, so determine for yourself whether someone is of real note or just pandering to the masses.

Concluding this post is a self-reflection that anyone with a voice can perform: Is what I am saying coherent? Is what I am saying interesting? Is what I am saying useful? Is what I am saying unique to me? Checking all these boxes will already put you at a higher standard than average. Dignity and reputation matter, after all: being seen as one of the agreeable masses is boring and leaves no discernible weaknesses. But weakness is just a step toward strength; one should be careful not to be counted among the meek and the lesser-willed.