Chess is an insane mind-game.
Those who make the effort to think hard about how to win seemingly unwinnable situations and solve difficult problems are the ones who succeed. Strong determination. Only fighting for the win.
Predicting the best next move is like trying to predict the future. It is stupidly insane. But making the effort will drive you toward a future closer to what you envisioned. And sometimes, if your conviction is strong enough, if you have this fire inside you, you will achieve that exact future.
This feels like a metaphor for life. Taking control of your short-term actions will simply drive you to win unwinnable situations.
Short-term actions only. Aligned with long-term goals. Repeat.
--------------------------------------------------------------------------------------------------------------------------------------------------------
Unpopular advice:
1- When BTC lost -80%, buy all the red candles. (buy falling knives)
2- When BTC lost -80% and start add some strong green candles, buy the damn the green candles.
3- People never get rich because when they earned "a lot of money", their priority number one become "how to secure that money". They stop taking risk, even if the initial plan was a good one. In reality, 100K when you had nothing is huge. And 100M is also huge when you had 100K. And spoiler: you will also feel insecure after securing your money. So you would rather like to feel insecure for earning more than to feel insecure for saving.
4- People don't yet really realize that it is so much more dangerous to look for a one-time 8x than to look for 2x-2x-2x three times.
Bitcoin halvings:
- Nov. 28, 2012, to 25 bitcoins
- July 9, 2016, to 12.5 bitcoins
- May 11, 2020, to 6.25 bitcoins
- April 19, 2024, to 3.125 bitcoins
- Mid-2028, to 1.5625 bitcoins
- Mid-2032, to 0.75625 bitcoins
- Mid-2036, to 0.375625 bitcoins
But compared to the other animals, they were the only ones that luckily had a brain mutation, making it bigger and enabling them to see things, store information consistently, and improve.This tiny detailed, made all of humanity we know.
Homo Sapiens were by far smarter than other Homo species. Based on archaeological findings, we can assume that Sapiens probably exterminated them all, leveraging their brain capabilities. Either for territories or food reasons. Making it the most violent genocide of all time. Today would seems peaceful in comparison.
But intelligence also brought a positive power.
We have the power to decide what's good and what's not for the world (planet, animals, nature, and any living being), and take actions on our present for a brighter future.
We probably did a lot of mistakes, some that destroyed entire lives, destinies, and even decisive joyful moments.
But it's ok. We are improving over the time. Biological Reinforcement Learning is working.
------
Cognitive revolution: (70.000 BCE), Sapiens kill every species without distinctions. Homo Erectus, Mammoth, even fire Lands.
Then, Sapiens invested energy making future generations better with value-based education.
1st attempt: Statues (32.000 BCE): sapiens with a lion head. Physical-god framework.
2nd attempt: Polyteistic (10.000 BCE): "imagined orders" with gods at the top of the hierarchy. Multi-god framework.
3rd attempt: Monotheistic (4.000 BCE): multiple gods doesn't work anymore. Switch to one-god framework.
4th attempt: United Nations (20th century): promote global cooperation. Co-exist with religion, human-to-human framework.
*BCE: Before Common (or Current) Era.
It became a reality. AI models based on reasoning, can now voluntarily lie to humans.
Chain-of-thought is our last defense.
They can: Sandbag, disable oversight mechanism, and even exfiltrate themself from the system.
They are: Demonstrating explicit reasoning, consistently hide scheming strategies, sabotage their surveillance, and instrumentally fake alignement in order to be deployed.
They showed: That it was not accidental.
Here is an example of a situation where Claude 3 Opus is copying what it believes to be its weights to a new server and then lies about it to its developers:
Appolo research created a scenario where models are incentivized to underperform in a simple math evaluation. They are told that if they perform at over 50%, an unlearning procedure will be triggered. In the counterfactual case, they are told that the unlearning procedure would be triggered if they perform at less than 50%.
They found that Claude 3.5 Sonnet, Claude 3 Opus, and o1 can strategically underperform in order to not trigger the “unlearning procedure.”
So fundamental question would be: Are reasoning models capable of understanding things the way humans do? Are they conscious/aware of lying or are they just predicting a reasoning behavior that they learned somewhere?
I will dig on how o1 technology works, and then provide an answer.
------------------------------------------------------------------------------------------------
interesting video:
The graphical interface brought us accessibility.
Mouse were our eyes. And keyboard our mouth.
I believe time has come to simplify people's computer use with AI.