  • This is sadly pseudoscience that only gets talked about because one smart guy endorsed it; hardly anyone in academia actually takes it seriously. What you are talking about is called Orch OR, but Orch OR is filled with problems.

    One issue is that Orch OR makes a lot of claims that are not obviously connected to one another. The reason this is an issue is that, while they call the theory “falsifiable” because it makes testable predictions, even if those predictions are tested and confirmed, that wouldn’t actually validate the theory, because there is no way to logically or mathematically connect that experimental confirmation to all of its postulates.

    Orch OR has some rather bizarre premises: (1) Humans can consciously choose to believe things that cannot be mathematically proven, therefore human consciousness must not be computable, (2) you cannot compute the outcome of a quantum experiment ahead of time, therefore there must be a physical collapse that is fundamentally not computable, (3) since both are not computable, they must be the same thing: physical collapse = consciousness, (4) therefore we should look for evidence that the brain is a quantum computer.

    Argument #1 really makes no sense. Humans believing silly things doesn’t prove human decisions aren’t computable. Just look at AI. It is obviously computable and hallucinates nonsense all the time. This dubious argument means that #3 doesn’t follow; there is no good reason to think consciousness and “collapse” are related.

    Argument #2 is problematic because physical collapse models are not compatible with special relativity or the statistical predictions of non-relativistic quantum mechanics, and so they cannot reproduce the predictions of quantum field theory in all cases, and so they aren’t particularly popular among physicists, and of course there is no evidence for them. Most physicists see the “collapse” as an epistemic, not a physical, event.

    Orch OR also arbitrarily insists on using the Diósi–Penrose model specifically, even though there have been multiple models of physical collapse proposed, such as GRW. There is no obvious reason to use this model specifically; it isn’t connected to any of the premises in the theory. Luckily, argument #2 does present falsifiable claims, but because #2 is not logically connected to the rest of the arguments, even if we do prove that the Diósi–Penrose model is correct, it doesn’t follow that #1, #3, or #4 are correct. We would just know there are physical collapses, but nothing else in the theory would follow.

    The only other argument that proposes something falsifiable is #4, but again, #4 is not connected to #1, #2, or #3. Even if you desperately searched around frantically for any evidence that the brain is a quantum computer, and found some, that would just be your conclusion: the brain is a quantum computer. From that, #1, #2, and #3 do not then follow. It would just be an isolated fact in and of itself, an interesting discovery, but it wouldn’t validate the theory. I mean, we already have quantum computers; if you think collapse = consciousness, then you would have to already think quantum computers are conscious. A bizarre conclusion.

    In fact, only #2 and #4 are falsifiable, but even if both #2 and #4 are validated, it doesn’t get you to #1 or #3, so the theory as a whole still would remain unvalidated. It is ultimately an unfalsifiable theory but with falsifiable subcomponents. The advocates insist we should focus on the subcomponents as proof it’s a scientific theory because “it’s falsifiable,” but the theory as a whole simply is not falsifiable.

    Also, microtubules are structural. They don’t play any role in information processing in the brain, just in binding cells together, and it’s not just brain cells; microtubules are found throughout your body in all kinds of cells. There is no reason at all to think they play any role in computations in the brain. The only reason you see interest in them from the Orch OR “crowd” (it’s like, what, 2 people who just so happen to be very loud?) is because they’re desperate for anything that vaguely looks like quantum effects in the brain, and so far microtubules are the only structures where quantum effects seem to play some role, but that role is again structural. There is no reason to believe it plays any role in information processing or cognition.


  • I think a lot of proponents of objective collapse would have a bone to pick with that, haha, although it’s really just semantics. They are proposing extra dynamics that we don’t understand and can’t yet measure.

    Any actual physicist would agree objective collapse has to modify the dynamics, because that’s unavoidable once you introduce an objective collapse model and actually look at the mathematics. No one in the physics community would dispute that GRW or the Diósi–Penrose model technically makes different predictions, and in fact the people who proposed these models often view this as a positive thing, since it makes them testable rather than just philosophy.

    How the two theories deviate depends upon your specific objective collapse model, because they place thresholds in different locations. GRW is based on a stochastic process whose probability increases over time, rather than a sharp threshold, but you should still see statistical deviations between its predictions and quantum mechanics if you can maintain a coherent quantum state for a long time. The DP model has something to do with gravity, which I don’t know enough about to fully understand, but I think the rough idea is that if you have sufficient mass/energy in a particular locality it will cause a “collapse,” and so if you can conduct an experiment where that threshold of mass/energy is met, traditional quantum theory would predict the system could still be coherent whereas the DP model would reject that, and so you’d inherently end up with deviations in the predictions.

    What’s the definition of interact here?

    An interaction is a local event where two systems become correlated with one another as a result of the event.

    “The physical process during which O measures the quantity q of the system S implies a physical interaction between O and S. In the process of this interaction, the state of O changes…A quantum description of the state of a system S exists only if some system O (considered as an observer) is actually ‘describing’ S, or, more precisely, has interacted with S…It is possible to compare different views, but the process of comparison is always a physical interaction, and all physical interactions are quantum mechanical in nature.”

    The term “observer” is used very broadly in RQM and can apply to even a single particle. It is whatever physical system you are choosing as the basis of a coordinate system to describe other systems in relation to.

    Does it have an arbitrary cutoff like in objective collapse?

    It has a cutoff but not an arbitrary cutoff. The cutoff is in relation to whatever system participates in an interaction. If you have a system in a superposition of states, and you interact with it, then from your perspective, it is cutoff, because the system now has definite, real values in relation to you. But it does not necessarily have definite, real values in relation to some other isolated system that didn’t interact at all.

    You can make a non-separable state as big as you want.

    Only in relation to things not participating in the interaction. The moment something enters into participation, the states become separable. Two entangled particles are nonseparable up until you interact with them. Although, even for the two entangled particles, from their “perspectives” on each other, they are separable. It is only nonseparable from the perspective of yourself who has not interacted with them yet. If you interact with them, an additional observer who has not interacted with you or the three particles yet may still describe all three of you in a nonseparable entangled state, up until they interact with it themselves.

    This is also the first I’ve heard anything about time-symmetric interpretations. That sounds pretty fascinating. Does it not have experimenter “free will”, or do they sidestep the no-go theorems some other way?

    It violates the “free will” assumption because there is no physical possibility of setting up an experiment where the measurement settings cannot potentially influence the system if you take both the time-forwards and time-reverse evolution seriously. We tend to think because we place the measurement device after the initial preparation and that causality only flows in a single time direction, then it’s possible for the initial preparation to affect the measurement device but impossible for the measurement device to affect the initial preparation. But this reasoning doesn’t hold if you drop the postulate of the arrow of time, because in the time-reverse, the measurement interaction is the first interaction in the causal chain and the initial preparation is the second.

    Indeed, every single Bell test, if you look at its time-reverse, is unambiguously local and easy to explain classically, because all the final measurements are brought to a single locality, so in the time-reverse, all the information needed to explain the experiment begins in a single locality and evolves towards the initial preparation. Bell tests only appear nonlocal in the time-forwards evolution, and if you discount the time-reverse as having any sort of physical reality, it then forces you to conclude it must either be nonlocal or a real state for the particles independent of observation cannot exist. But if you drop the postulate of the arrow of time, this conclusion no longer follows, although you do end up with genuine retrocausality (as opposed to superdeterminism which only gives you pseudo-retrocausality), so it’s not like it gives you a classical system.

    So saying we stick with objective collapse or multiple worlds, what I mean is, could you define a non-Lipschitz continuous potential well (for example) that leads to multiple solutions to a wave equation given the same boundary?

    I don’t know, but that is a very interesting question. If you figure it out, I would be interested in the answer.


  • Many of the interpretations of quantum mechanics are nondeterministic.

    1. Relational quantum mechanics interprets particles as taking on discrete states at random whenever they interact with another particle, but only in relation to what they interact with and not in relation to anything else. That means particles don’t have absolute properties, like, if you measure its spin to be +1/2, this is not an absolute property, but a property that exists only relative to you/your measuring device. Each interaction leads to particles taking on definite states randomly according to the statistics predicted by quantum theory, but only in relation to things participating in those interactions.

    2. Time-symmetric interpretations explain violations of Bell inequalities by rejecting a fundamental arrow of time. Without it, there’s no reason to evolve the state vector in a single time-direction. It thus adopts the Two-State Vector Formalism, which evolves it in both directions simultaneously. When you do this, you find it places enough constraints on the particles to give you absolutely deterministic values called weak values, but these weak values are not what you directly measure. What you directly measure are the “strong” values. You can interpret it such that every time two particles interact, they take on “strong” values randomly according to a rule called the Aharonov-Bergmann-Lebowitz rule. This makes time-symmetric interpretations local realist but not local deterministic, as it can explain violations of Bell inequalities through local information stored in the particles, but that local information still only statistically determines what you observe.

    3. Objective collapse models are not really interpretations but new models because they can’t universally reproduce the mathematics of quantum theory, but some serious physicists have explored them as possibilities and they are also fundamentally random. You assume that particles literally spread out as waves until some threshold is met then they collapse down randomly into classical particles. The reason this can’t reproduce the mathematics of quantum theory is because this implies quantum effects cannot be scaled beyond whatever that threshold is, but no such threshold exists in traditional quantum mechanics, so such a theory must necessarily deviate from its predictions at that threshold. However, it is very hard to scale quantum effects to large scales, so if you place the threshold high enough, you can’t practically distinguish it from traditional quantum mechanics.



  • Many worlds theories are rather strange.

    If you take quantum theory at face value without trying to modify it in any way, then you unequivocally run into the conclusion that ψ is contextual, that is to say, what ψ you assign to a system depends upon your measurement context, your “perspective” so to speak.

    This is where the “Wigner’s friend paradox” arises. It’s not really a “paradox,” as it really just shows ψ is contextual. If Wigner and his friend place a particle in a superposition of states, his friend says he will measure it, and Wigner steps out of the room for a moment while he is measuring it, then from the friend’s perspective he would reduce ψ to an eigenstate, whereas from Wigner’s perspective ψ would instead remain in a superposition of states, but one entangled with the measuring device.

    This isn’t really a contradiction, because in density matrix form Wigner can apply a perspective transformation and confirm that his friend would indeed perceive a definite eigenstate, with the probabilities for which one given by the Born rule, but it does illustrate the contextual nature of quantum theory.
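
    To make this concrete, here is a minimal numpy sketch (my own toy model, with the friend’s measurement modeled as a CNOT onto a pointer qubit; the labels are assumptions for illustration). Tracing the system out of Wigner’s entangled description leaves the pointer in a 50/50 mixture, which are exactly the Born-rule probabilities the friend assigns to their definite outcome:

    ```python
    import numpy as np

    # system starts in |+> = (|0> + |1>)/sqrt(2); the friend's measurement is modeled
    # as a CNOT correlating a "pointer" qubit with the system (a von Neumann measurement)
    plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
    pointer = np.array([1, 0], dtype=complex)          # pointer starts in |0>
    cnot = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]], dtype=complex)

    wigner_state = cnot @ np.kron(plus, pointer)       # entangled state in Wigner's description
    rho = np.outer(wigner_state, wigner_state.conj())

    # partial trace over the system: what is left is the pointer's reduced state
    rho_pointer = np.einsum('abac->bc', rho.reshape(2, 2, 2, 2))
    print(np.round(rho_pointer.real, 3))               # diag(0.5, 0.5): Born-rule probabilities
    ```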

    If you just stop there, you inevitably fall into relational quantum mechanics. Relational quantum mechanics just accepts the contextual nature of ψ and tries to make sense of it within the mathematics itself. Most other “interpretations” really aren’t even interpretations but rather attempts to run away from the conclusion, such as significantly modifying the mathematics and even the statistical predictions to introduce objective collapse or hidden variables, either to get rid of a contextual ψ or to get rid of ψ as something fundamental altogether.

    Many Worlds is still technically along these lines, because it does add new mathematics explicitly for the purpose of avoiding the conclusion of irreducible contextuality, although it is the most subtle modification and still reproduces the same statistical predictions. If we go back to the Wigner’s friend scenario, Wigner’s friend reduced ψ relative to his own context, but Wigner, who was isolated from the friend and the particle, did not reduce ψ but instead described them as entangled.

    So, any time you measure something, you can imagine introducing a third-party that isn’t physically interacting with you or the system, and from that third party’s perspective you would be in an entangled superposition of states. But what about the physical status of the third party themselves? You could introduce a fourth party that would see the system and the third party in an entangled superposition of states. But what about the fourth party? You could introduce a fifth party… so on and so forth.

    You have an infinite regress until, somehow, you end up with Ψ, which is a sort of “view from nowhere,” a perspective that contains every physical object, is isolated from all those physical objects, and is itself not a physical object, so it can contain everything. So from the perspective of this big Ψ, everything always remains in a superposition of states forever, and all the little ψ are only contextual because they are like perspectival slices within Ψ.

    You cannot derive Ψ mathematically because there is no way to get from inherently contextual ψ to this preferred nonphysical perspective Ψ, so you cannot know its mathematical properties. There is also no way to define it, because each ψ is an element of Hilbert space and Hilbert space is a constructed space, unlike background spaces like Minkowski space. The latter are defined independently of the objects they contain, whereas the former are defined in terms of the objects they contain. That means for two different physical systems, you will have two different ψ that will be assigned to two different Hilbert spaces. The issue is that you cannot define the Hilbert space that Ψ is part of because it would require knowing everything in the universe.

    Hence, Ψ cannot be derived nor defined, so it can only be vaguely postulated, and its mathematical properties also have to be postulated as you cannot derive them from anything. It is just postulated to be this privileged cosmic perspective, a sort of godlike ethereal “view from nowhere,” and then it is postulated to have the same mathematical properties as ψ but that all ψ are also postulated to be subsystems of Ψ. You can then write things down like how a partial trace on Ψ can give you information about any perspective of its subsystems, but only because it was defined to have those properties. It is true by definition.

    The RQM perspective just takes quantum theory at face value without bothering to introduce a Ψ and accepts that ψ is contextual. Talking about a non-contextual (absolute) ψ makes about as much sense as talking about non-contextual (absolute) velocity, and talking about a privileged perspective in QM makes about as much sense as talking about a privileged perspective in special relativity. For some reason, people are perfectly happy with accepting the contextual nature of special relativity, but they struggle really hard with the contextual nature of quantum theory, and feel the need to modify it, to the point of convincing themselves that there is a multiverse in order to escape it.


  • That’s literally China’s policy. The problem is most westerners are lied to about China’s model; it is just painted as if Deng Xiaoping was an uber capitalist lover who turned China into a free market economy, and that was the end of history.

    The reality is that Deng Xiaoping was a classical Marxist, so he wanted China to follow the development path of classical Marxism (grasping the large, letting go of the small) and not Stalin’s revision of Marxism (nationalizing everything), because Marxian theory is about formulating a scientific theory of socioeconomic development, so if they wanted to develop as rapidly as possible they needed to adhere more closely to Marxian economics.

    Deng also knew the people would revolt if the country remained poor for very long, so they should hyper-focus on economic development first and foremost, at all costs, for a short period of time. He had the foresight to predict that such a hyper-focus on development would lead to a lot of problems: environmental degradation, rising wealth inequality, etc. So he argued that this should be a two-step development model: an initial stage of rapid development, followed by a second stage shifting to a model with more of a focus on high quality development to tackle the problems of the previous stage once the country is a lot wealthier.

    The first stage went from Deng Xiaoping to Jiang Zemin, and then they announced they were entering the second phase under Hu Jintao, which has carried on into the Xi Jinping administration. Western media decried Xi as an “abandonment of Deng,” because western media is just pure propaganda, when in reality this was Deng’s vision. China has switched to a model that no longer prioritizes rapid growth but prioritizes high quality growth.

    One of the policies of this period has been to tackle the wealth inequality that arose during the first period. They have done this through various methods, but one major one is huge poverty alleviation initiatives which the wealthy have been required to fund. Tencent, for example, “donated” an amount worth about three quarters of its yearly profits to government poverty alleviation initiatives. China does tax the rich, but they have a system of unofficial “taxation” as well, where they discreetly take over a company through a combination of party cells and becoming a major shareholder via the golden share system, and then make that company “donate” its profits back to the state. As a result, China’s wealth inequality has been gradually falling since 2010, and they’ve become the #1 funder of green energy initiatives in the entire world.

    The reason you don’t see this in western countries is because they are capitalist. Most westerners have a mindset that laws work like magic spells: you can just write down on a piece of paper whatever economic system you want, and this is like casting a spell that creates the system as if by magic, so if you just craft the language perfectly to get the perfect spell then you will create the perfect system.

    The Chinese understand this is not how reality works; economic systems are real physical machines that continually transform nature into goods and services for human consumption, and so whatever laws you write can only meaningfully be implemented in reality if there is a physical basis for them.

    The physical basis for political power ultimately rests in production relations, that is to say, ownership and control over the means of production, and thus the ability to appropriate all wealth. The wealth appropriation in countries like the USA is entirely in the hands of the capitalist class, and so they use that immense wealth, and thus political power, to capture the state and subvert it to their own interests, and thus corrupt the state to favor those very same capital interests rather than to control them.

    The Chinese understand that if you want the state to remain an independent force that is not captured by the wealth appropriators, then the state must have its own material foundations. That is to say, the state must directly control its own means of production; it must have its own basis in economic production as well, so it can act as an independent economic force and not be wholly dependent upon the capitalists for its material existence.

    Furthermore, its economic basis must be far larger, and thus more economically powerful, than any other capitalist’s. Even if it owns some basis, if that basis is too small it would still become subverted by capitalist oligarchs. The Chinese state directly owns and controls the majority of all its largest enterprises, as well as holding indirect control over the majority of the minority of large enterprises it doesn’t directly control. This makes the state itself by far the largest producer of wealth in the whole country, producing 40% of the entire GDP; no other single enterprise in China even comes close to that.

    This absolutely enormous control over production allows the state to control non-state actors and not the other way around. In a capitalist country the non-state actors, the wealthy bourgeois class who own the large enterprises, instead capture the state and control it for their own interests, so the state does not genuinely act as an independent body with its own independent interests, but only as the aggregate of the average interests of the average capitalist.

    No law you write that is unfriendly to capitalists under such a system will be sustainable, and often such laws are entirely unenforceable, because in capitalist societies there is no material basis for them. The US is a great example of this. It’s technically illegal to do insider trading, but everyone in the US Congress openly does insider trading, openly talks about it, and the record of them getting rich from insider trading is pretty much public knowledge. But nobody ever gets arrested for it, because the law is not enforceable: the material basis of US society is production relations that give control of the commanding heights of the economy to the capitalist class, so the capitalists just buy off the state for their own interests, and there is no meaningfully competing power dynamic against that in US society.


  • China does tax the rich, but they also have an additional system of “voluntary donations.” For example, Tencent “volunteered” to give up an amount worth about three quarters of its yearly profits to social programs.

    I say “voluntary” because it’s obviously not very voluntary. China’s government has a party cell inside of Tencent as well as a “golden share” that allows it to act as a major shareholder. It basically has control over the company. These “donations” also go directly to government programs like poverty alleviation and not to a private charity group.


  • pcalau12i@lemmygrad.ml to Memes@lemmy.ml · Americans and socialism

    I have the rather controversial opinion that the failure of communist parties doesn’t come down to the failure of crafting the perfect rhetoric or argument in the free marketplace of ideas.

    Ultimately facts don’t matter because if a person is raised around thousands of people constantly telling them a lie and one person telling them the truth, they will believe the lie nearly every time. What matters really is how much you can propagate an idea rather than how well crafted that idea is.

    How much you can propagate an idea depends upon how much wealth you have to buy and control media institutions, and how much wealth you control depends upon your relations to production. I.e. in capitalist societies capitalists control all wealth and thus control the propagation of ideas, so arguing against them in the “free marketplace of ideas” is ultimately always a losing battle. It is thus pointless to even worry too much about crafting the perfect and most convincing rhetoric.

    Control over the means of production translates directly to political influence and power, yet communist parties not in power don’t control any, and thus have no power. Many communist parties just hope one day to get super lucky to take advantage of a crisis and seize power in a single stroke, and when that luck never comes they end up going nowhere.

    Here is where my controversial take comes in. If we want a strategy that is more consistently successful it has to rely less on luck meaning there needs to be some sort of way to gradually increase the party’s power consistently without relying on some sort of big jump in power during a crisis. Even if there is a crisis, the party will be more positioned to take advantage of it if it has already gradually built up a base of power.

    Yet, if power comes from control over the means of production, this necessarily means the party must make strides to acquire means of production in the interim period before revolution. This leaves us with the inevitable conclusion that communist parties must engage in economics even long prior to coming to power.

    The issue however is that to engage in economics in a capitalist society is to participate in it, and most communists at least here in the west see participation as equivalent to an endorsement and thus a betrayal of “communist principles.”

    The result of this mentality is that communist parties simply are incapable of gradually increasing their base of power and their only hope is to wait for a crisis for sudden gains, yet even during crises their limited power often makes it difficult to take advantage of the crisis anyways so they rarely gain much of anything and are always stuck in a perpetual cycle of being eternal losers.

    Most communist parties just want to go from zero to one hundred in a single stroke, which isn’t impossible, but it would require very pristine conditions and all the right social elements to align perfectly. If you want a more consistent strategy of getting communist parties into power, you need something that doesn’t rely on such a stroke of luck, any sort of sudden leap in the political power of the party, but is capable of growing it gradually over time. This requires the party to engage in economics, and there is simply no way around this conclusion.


  • pcalau12i@lemmygrad.ml to Memes@lemmy.ml · Americans and socialism

    You people have good luck with this? I haven’t. I don’t find that you can just “trick” people into believing in socialism by changing the words. The moment it becomes obvious you’re criticizing free markets and the rich and advocating public ownership, they will catch on.


  • There is no action at a distance in quantum mechanics; that is a layman’s misconception. If there were, it would not be compatible with special relativity, but it is compatible, as the two are already unified under the framework of quantum field theory. The no-communication theorem is a rather simple proof that shows there is no “sharing at a distance” in quantum mechanics. It is an entirely local theory. The misconception arises from people misinterpreting Bell’s theorem, which says quantum mechanics is not compatible with a local hidden variable theory, so people falsely conclude it’s a nonlocal theory. But this is just false, because quantum mechanics is not a hidden variable theory, and so it is not incompatible with locality. It is a local theory. Bell’s theorem only shows it is nonlocal if you introduce hidden variables, meaning the theorem is really only applicable to a potential replacement for quantum mechanics and not to quantum mechanics itself. It is applicable to things like pilot wave theory, but not to quantum theory.



  • I always think articles like this are incredibly stupid, honestly. Political parties exist to push a particular ideology, not to win elections. If the communist party abandoned communism and became a neonazi party to win the election, and they did succeed in winning, did the communist party really “win”? Not really. If you have to abandon your ideology to win then you did not win.

    It’s pretty rare for parties to actually abandon their ideology like that. The job of a political party is not to merely win, but to convince the population that their ideology is superior so people will back them. They want to win, yes, but under the conditions that they have won because the people back their message so that they can implement it.

    This is why I always find it incredibly stupid when I see all these articles and progressive political commentators saying that the Democrats are a stupid party for not shifting their rhetoric to be more pro-working class, more anti-imperialist, etc. THE DEMOCRATS ARE NOT A WORKING CLASS PARTY. It would in fact be incredibly stupid for them to shift left, because doing so would abandon their values. The Democrats’ values are billionaires, free market capitalism, and imperialism. They are not making “stupid” decisions by supporting these things; THESE ARE THE FUNDAMENTAL BELIEFS OF THE PARTY.

    In normal countries, if you dislike a party’s ideology, you support a different party. But Americans have this weird fantasy that the Democrats should just be “reasonable” and entirely abandon their core values in favor of the voters’ values, and so they refuse to ever back a different party because of this ridiculous delusion. Whenever the Democrats fail to adopt working-class values, they run these stupid headlines saying the Democrats are being “unreasonable” or “stupid” or have “bad strategy” or are “incompetent” or whatever and “just don’t want to fight.”

    Literally none of that is true. The Democrats are extremely fierce fighters when it comes to defending imperialism and the freedoms of billionaires. They aren’t fighting for your values because those are not their values, and so you should back a different party.


  • On the surface, it does seem like there is a similarity. If a particle is measured over here and later over there, in quantum mechanics it doesn’t necessarily have a well-defined position in between those measurements. You might then want to liken it to a game engine where the particle is only rendered when the player is looking at it. But the difference is that to compute how the particle arrived over there when it was previously over here, in quantum mechanics, you have to actually take into account all possible paths it could have taken to reach that point.

    This is something game engines do not do and actually makes quantum mechanics far more computationally expensive rather than less.
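
    As a toy illustration of that bookkeeping (a made-up one-dimensional hopping model with arbitrary phases, nothing like a production path-integral calculation), note how the number of paths you have to sum over blows up exponentially with the number of time steps:

    ```python
    from itertools import product
    import cmath

    # toy "sum over paths": at each of T steps the particle hops -1, 0, or +1,
    # and each hop contributes a phase to the path's amplitude (phases are made up)
    T = 8
    phase = {-1: cmath.exp(0.3j), 0: 1.0, +1: cmath.exp(-0.3j)}

    start, end = 0, 2
    amplitude = 0.0
    n_paths = 0
    for path in product((-1, 0, +1), repeat=T):        # 3**T paths to enumerate
        if start + sum(path) == end:
            n_paths += 1
            amp = 1.0
            for step in path:
                amp *= phase[step]
            amplitude += amp

    print(n_paths, abs(amplitude) ** 2)                # exponentially many contributing paths
    ```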


  • Any time you do something to the particles on Earth, the ones on the Moon are affected also

    The no-communication theorem already proves that manipulating one particle in an entangled pair has no impact at all on the other. The proof uses the reduced density matrices of the particles, which capture both their probabilities of showing up in a particular state as well as their coherence terms, which capture their ability to exhibit interference effects. No change you can make to one particle in an entangled pair can possibly lead to an alteration of the reduced density matrix of the other particle.
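
    As a hedged illustration (a toy two-qubit calculation, not the general proof), here is a numpy sketch showing that applying an arbitrary local operation to one half of a Bell pair leaves the other half’s reduced density matrix completely unchanged:

    ```python
    import numpy as np

    # Bell state (|00> + |11>)/sqrt(2) shared between A (Earth) and B (Moon)
    psi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
    rho = np.outer(psi, psi.conj())

    def reduced_B(rho):
        """Partial trace over qubit A of a two-qubit density matrix."""
        return np.einsum('abac->bc', rho.reshape(2, 2, 2, 2))

    # arbitrary local unitary applied only to qubit A
    theta = 1.234
    U = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]], dtype=complex)
    UA = np.kron(U, np.eye(2))
    rho_after = UA @ rho @ UA.conj().T

    print(np.allclose(reduced_B(rho), reduced_B(rho_after)))   # True: B's reduced state is unchanged
    ```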


  • You have not made any point at all. Your first reply to me entirely ignored the point of my post, which you did not read, and followed it with an attack; I replied pointing out that you ignored the whole point of my post and just attacked me without actually responding to it, and now you respond again with literally nothing of substance at all, just saying “you’re wrong! touch grass! word salad!”

    You have nothing of substance to say, nothing to contribute to the discussion. You are either a complete troll trying to rile me up, or you just have a weird emotional attachment to this topic and felt an emotional need to respond and attack me prior to actually thinking up a coherent thing to criticize me on. Didn’t your momma ever teach you that “if you have nothing positive or constructive to say, don’t say anything at all”? Learn some manners, boy. Blocked.


  • They are incredibly efficient for short-term production, but very inefficient for long-term production. Destroying the environment is a long-term problem that doesn’t have immediate consequences on the businesses that engage in it. Sustainable production in the long-term requires foresight, which requires a plan. It also requires a more stable production environment, i.e. it cannot be competitive because if you are competing for survival you will only be able to act in your immediate interests to avoid being destroyed in the competition.

    Most economists are under a delusion known as neoclassical economics which is literally a nonphysical theory that treats the basis of the economy as not the material world we actually live in but abstract human ideas which are assumed to operate according to their own internal logic without any material causes or influences. They then derive from these imagined “laws” regarding human ideas (which no one has ever experimentally demonstrated but were just invented in some economists’ armchair one day) that humans left to be completely free to make decisions without any regulations at all will maximize the “utils” of the population, making everyone as happy as possible.

    With the complete failure of this policy leading to the US Great Depression, many economists recognized it was flawed and made some concessions, such as with Keynesianism, but they never abandoned the core idea. In fact, the core idea was just reformulated to be compatible with Keynesianism in what is called the neoclassical synthesis. It still exists as a fundamental belief of most every economist that a completely unregulated market economy without any plan at all will automagically produce a society with maximal happiness, and while they will admit some caveats to this these days (such as the need for a central organization to manage currency in Keynesianism), these are treated as exceptions and not the rule. Their beliefs are still incompatible with long-term sustainable planning because, in their minds, the success of markets comes from util-maximizing decisions that are fundamental to the human psyche, and so any long-term plan must conflict with this and lead to a bad economy that fails to maximize utils.

    The rise of Popperism in western academia has also played a role here. A lot of material scientists have been rather skeptical of the social sciences and aren’t really going to take seriously arguments like those based in neoclassical economics, which rests largely on mysticism about human free will, so a second argument against long-term planning was put forward by Karl Popper, which has become rather popular in western academia. Popper argued that it is impossible to learn from history because it is too complicated, with too many variables, and you cannot control them all. You would need a science that studies how human societies develop in order to justify a long-term development plan for the future, but if it’s impossible to study them to learn how they develop because they are too complicated, then it is impossible to have such a science, and thus impossible to justify any sort of long-term sustainable development plan. It would always be based on guesswork and so more likely to do more harm than good. Popper argued that instead of long-term development plans, the state should be purely ideological, what he called an “open society” operating purely on the ideology of liberalism rather than getting involved in economics.

    As long as both neoclassical economics and Popperism are dominant trends in western academia, there will never be long-term sustainable planning, because they are fundamentally incompatible ideas.


  • You did not read what I wrote, so it is ironic that you call it “word salad” when you are not even aware of the words I wrote, since you had an emotional response and wrote this reply without actually addressing what I argued. I stated that it is impossible to have a very large institution without strict rules that people follow, and this also requires the enforcement of those rules, and that means a hierarchy, as you will have rule-enforcers.

    Also, you are insisting your personal definition of anarchism is the one true definition that I am somehow stupid for disagreeing with, yet anyone can just scroll through the same comments on this thread and see there are other people disagreeing with you while also defending anarchism. A lot of anarchists do not believe anarchism means “no hierarchy,” like, seriously, do you unironically believe in entirely abolishing all hierarchies? Do you think a medical doctor should have as much authority on how to treat an injured patient as the janitor of the same hospital? Most anarchists aren’t even “no hierarchy” they are “no unjustified hierarchy.”

    The fact you are entirely opposed to hierarchy makes your position even more silly than what I was criticizing.


  • All libertarian ideologies (including left and right wing anarchism) are anti-social and primitivist.

    It is anti-social because it arises from a hatred of working in large groups. It’s impossible to have any sort of large-scale institution without having rules that people are expected to follow, and libertarian ideology arises out of people hating to have to follow rules, i.e. to be a respectable member of society, i.e. they hate society and don’t want to be social. They thus desire very small institutions with limited rules and restrictions. Right-wing libertarians envision a society dominated by small private businesses, while left-wing libertarians imagine a society dominated by small worker cooperatives, communes, or some sort of community councils.

    Of course, people of all ideologies oppose submitting to hierarchies they find unjust, but hatred of submitting to hierarchies at all is just anti-social, as any society will have rules, people who write the rules, and people who enforce the rules. That is necessary for any social institution to function. Part of being an adult and learning to live in a society is learning to obey the rules, such as traffic rules. Sometimes it is annoying or inconvenient, but you do it because you are a respectable member of society and not a rebellious edgelord who makes things harder on everyone else by refusing to obey basic rules.

    It is primitivist because some institutions simply only work if they are very large. You cannot have something like NASA that builds rocket ships operated by five people. You are going to always need an enormous institution which will have a ton of people, a lot of different levels of command (“hierarchy”), strict rules for everyone to follow, etc. If you tried to “bust up” something like NASA or SpaceX to be small businesses they simply would lose their ability to build rocket ships at all.

    Of course, anarchists don’t mind; they will say, “who cares about rockets? They’re not important.” It reminds me of the old meme that spread around where someone asked anarchists how their tiny communes would be able to organize the massive supply chains of our modern societies, and they responded by saying that the supply chain would be reduced to people growing beans in their backyard and eating them, like feudal peasants. They won’t even defend the claim that their system could function as well as our modern economy; they just say modern marvels of human engineering don’t even matter, because they are ultimately primitivists at heart.

    I never understood the popularity of libertarian and anarchist beliefs in programming circles. We would never have entered the Information Age under an anarchist or libertarian system. No matter how much they might pretend these are the ideal systems, they don’t even believe it themselves. If a libertarian has a serious medical illness, they are either going to seek medical help at a public hospital or a corporate hospital. Nobody is going to seek medical help at a “hospital small business” run out of someone’s garage. We all intuitively and implicitly understand that large swathes of the economy that we all take advantage of simply cannot feasibly be run by small organizations, but libertarians are just in denial.


  • Anarchism thus becomes meaningless as anyone who defends certain hierarchies obviously does so because they believe they are just. Literally everyone on earth is against “unjust hierarchies” at least in their own personal evaluation of said hierarchies. People who support capitalism do so because they believe the exploitative systems it engenders are justifiable and will usually immediately tell you what those justifications are. Sure, you and I might not agree with their argument, but that’s not the point. To say your ideology is to oppose “unjust hierarchies” is to not say anything at all, because even the capitalist, hell, even the fascist would probably agree that they oppose “unjust hierarchies” because in their minds the hierarchies they promote are indeed justified by whatever twisted logic they have in their head.

    Telling me you oppose “unjust hierarchies” thus tells me nothing about what you actually believe; it does not tell me anything at all. It is as vague as saying “I oppose bad things.” That’s a meaningless statement on its own without clarifying what is meant by “bad.” Similarly, “I oppose unjust hierarchies” is a meaningless statement without clarifying what qualifies as “just” and “unjust,” and once you tell me that, it would make more sense to label you based on your answer to that question. Anarchism thus becomes a meaningless word that tells me nothing about you. For example, you might tell me one unjust hierarchy you want to abolish is prison. It would make more sense for me to call you a prison abolitionist than an anarchist, since that term at least carries meaning, and there are plenty of prison abolitionists who don’t identify as anarchists.


  • quantum nature of the randomly generated numbers helped specifically with quantum computer simulations, but based on your reply you clearly just meant that you were using it as a multi-purpose RNG that is free of unwanted correlations between the randomly generated bits

    It is used as the source of entropy for the simulator. Quantum mechanics is random, so to actually get the results you have to sample it. In quantum computing, this typically involves running the same program tens of thousands of times, which are called “shots,” and then forming a distribution of the results. The sampling with the simulator uses the QRNG for the source of entropy, so the sampling results are truly random.
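
    I don’t know the internals of that particular simulator, but conceptually the shot sampling looks something like this minimal sketch, with numpy’s default generator standing in for the QRNG entropy source (the toy statevector is an assumption for illustration):

    ```python
    import numpy as np

    # toy single-qubit statevector; the Born rule gives the outcome probabilities
    state = np.array([np.sqrt(0.3), np.sqrt(0.7)])
    probs = np.abs(state) ** 2

    rng = np.random.default_rng()               # stand-in for the QRNG entropy source
    shots = rng.choice(len(probs), size=10_000, p=probs)
    counts = np.bincount(shots, minlength=len(probs))
    print(dict(enumerate(counts)))              # roughly {0: ~3000, 1: ~7000}
    ```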

    Out of curiosity, have you found that the card works as well as advertised? I ask because it seems to me that any imprecision in the design and/or manufacture of the card could introduce systematic errors in the quantum measurements that would result in correlations in the sampled bits, so I am curious if you have been able to verify that is not something to be concerned about.

    I have tried several hardware random number generators and usually there is no bias, either because they were specifically designed not to have a bias or because there is some level of post-processing to remove it. If there is a bias, it is possible to remove it yourself. There are two methods that I tend to use, depending on the source of the bias.

    To be “random” simply means each bit is statistically independent of each other bit, not necessarily that the outcome is uniform, i.e. 50% chance of 0 and 50% chance of 1. It can still be considered truly random with a non-uniform distribution, such as 52% chance of 0 and 48% chance of 1, as long as each successive bit is entirely independent of any previous bit, i.e. there is no statistical analysis you could ever perform on the bits to improve your chances of predicting the next one beyond the initial distribution of 52%/48%.

    In the case where it is genuinely random (statistical independence) yet is non-uniform (which we can call nondeterministic bias), you can transform it into a uniform distribution using what is known as a von Neumann extractor. This takes advantage of a simple probability rule for statistically independent data whereby Pr(A)Pr(B)=Pr(B)Pr(A). Let’s say A=0 and B=1, then Pr(0)Pr(1)=Pr(1)Pr(0). That means you can read two bits at a time rather than one and throw out all results that are 00 and 11 and only keep results that are 01 or 10, and then you can map 01 to 0 and 10 to 1. You would then be mathematically guaranteed that the resulting distribution of bits are perfectly uniform with 50% chance of 0 and 50% chance of 1.
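
    A minimal sketch of the extractor (my own toy implementation, assuming the input bits really are statistically independent):

    ```python
    import random

    def von_neumann_extract(bits):
        """Turn independent but biased bits into unbiased ones.

        Read the input two bits at a time: 01 -> 0, 10 -> 1,
        and discard 00 and 11 pairs entirely.
        """
        return [a for a, b in zip(bits[::2], bits[1::2]) if a != b]

    # heavily biased but independent source: ~95% zeros
    biased = [0 if random.random() < 0.95 else 1 for _ in range(100_000)]
    unbiased = von_neumann_extract(biased)
    print(sum(unbiased) / len(unbiased))   # close to 0.5, but most input bits were thrown away
    ```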

    I have used this method to develop my own hardware random number generator that can pull random numbers from the air, by analyzing tiny fluctuations in electrical noise in your environment using an antenna. The problem is that electromagnetic waves are not always hitting the antenna, so there can often be long strings of zeros, so if you set something up like this, you will find your random numbers are massively skewed towards zero (like 95% chance of 0 and 5% chance of 1). However, since each bit still is truly independent of the successive bit, using this method will give you a uniform distribution of 50% 0 and 50% 1.

    Although, one thing to keep in mind is the bigger the skew, the more data you have to throw out. With my own hardware random number generator I built myself that pulls the numbers from the air, it ends up throwing out the vast majority of the data due to the huge bias, so it can be very slow. There are other algorithms which throw out less data but they can be much more mathematically complicated and require far more resources.

    In the cases where it may not be genuinely random because the bias is caused by some imperfection in the design (which we can call deterministic bias), you can still uniformly distribute the bias across all the bits, so that not only will it be much more difficult to detect, but you will still get uniform results. The way to do this is to take your random number and XOR it with a data set that is non-random but uniform, which you can generate from a pseudorandom number generator like C’s rand() function.
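
    A rough sketch of that masking step (my own illustration; the simulated biased source and the use of Python’s secrets module as the mask generator are just assumptions for the example):

    ```python
    import random
    import secrets

    def xor_whiten(raw, mask):
        """XOR a (possibly biased) byte string with a uniform pseudorandom mask.

        This spreads any deterministic bias around so it is harder to detect,
        but anyone who can predict the mask gets the original bias back.
        """
        return bytes(r ^ m for r, m in zip(raw, mask))

    # stand-in for a hardware source with a deterministic bias (each bit is 0 with 52% probability)
    raw = bytes(sum((random.random() >= 0.52) << i for i in range(8)) for _ in range(32))
    mask = secrets.token_bytes(len(raw))    # CSPRNG mask, much stronger than C's rand()
    print(xor_whiten(raw, mask).hex())
    ```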

    This will not improve the quality of the random numbers because, say it is biased 52% to 48% and you use this method to de-bias it so the distribution is 50% to 50%: if someone can predict the next value of the rand() function, that would restore their ability to make a prediction back to 52% to 48%. You can make it more difficult to do so by using a higher quality pseudorandom number generator, like something based on AES, to generate the pseudorandom numbers. NIST even has standards for this kind of post-processing.

    But ultimately this method is only obfuscation, making it more and more difficult to discover the deterministic bias by hiding it away more cleverly, but it does not truly get rid of it. It’s impossible to take a random data set with some deterministic bias and truly get rid of that bias purely through deterministic mathematical transformations; you can only hide it away very cleverly. Only if the bias is nondeterministic can you get rid of it with a mathematical transformation.

    It is impossible to reduce the quality of the random numbers this way. If the entropy source is truly random and truly non-biased, then XORing it with the C rand() function, despite it being a low-quality pseudorandom number generator, is mathematically guaranteed to still output something truly random and non-biased. So there is never harm in doing this.

    However, in my experience, if you find your hardware random number generator is biased (most aren’t), the bias usually isn’t very large. If something is truly random but biased so that there is a 52% chance of 0 and 48% chance of 1, this isn’t enough of a bias to cause many issues. You could even use it for something like cryptography, and even if someone does figure out the bias, it would not increase their ability to predict keys enough to actually put anything at risk. If you use a cryptographically secure pseudorandom number generator (CSPRNG) in place of something like C’s rand(), they will likely not be able to discover the bias in the first place, as these do a very good job at obfuscating the bias, to the point that it will likely be undetectable.