Joined 1 year ago
Cake day: August 8th, 2023



  • I’m assuming they’re indicating that the mass below the apparatus increased in the fall (when the storage was filled) and decreased slowly through the winter, leading them to measure a changed gravitational constant. A back-of-the-napkin calculation shows that in order to change the measured gravitational constant by 1 % by placing a point mass 1 m below the apparatus, that point mass would need to be about 15 000 tons. That’s not a huge number, and it’s not unlikely that their equipment could measure the gravitational acceleration to much better precision than 1 %, but I still think it sounds a bit unlikely.

    Remember: If we place the point mass (or equivalently, the centre of mass of the coal heap) 2 m below the apparatus instead of 1 m, we need 60 000 tons to get the same effect (because gravitational force scales as inverse distance squared). To me this sounds like a fun “wandering story” that, while not impossible, definitely sounds unlikely.

    For reference: The coal consumption of Luxembourg in 2016 was roughly 90 000 tons. Coal has a density of roughly 1500 kg / m3, so 15 000 tons of coal is about 10 000 m3, i.e. a 21.5 m x 21.5 m x 21.5 m cube, or about four Olympic swimming pools.

    Edit: The density calculations above use the density of solid coal, not the (significantly lower) density of a coal heap, which contains a lot of air between the coal lumps. My guess for the density of a coal heap is in the range of ≈ 1000 kg / m3 (equivalent to guessing that a coal heap has a void fraction of ≈ 1/3).
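    The volume and scaling arithmetic above can be sanity-checked in a few lines (a rough sketch; the 15 000-ton figure and both densities are the estimates from this comment, not measured values):

```python
mass_1m = 15_000e3   # kg, estimated point mass placed 1 m below the apparatus
rho_coal = 1500.0    # kg/m^3, density of solid coal
rho_heap = 1000.0    # kg/m^3, guessed density of a loose coal heap

# Volume of 15 000 tons of solid coal, and the side of the equivalent cube.
volume = mass_1m / rho_coal   # 10 000 m^3
side = volume ** (1 / 3)      # ~21.5 m

# The perturbing acceleration a = G*M/r^2 falls off as 1/r^2, so producing
# the same effect at r = 2 m requires 2^2 = 4 times the mass.
mass_2m = mass_1m * (2.0 / 1.0) ** 2  # 60 000 tons

# An Olympic pool is roughly 50 m x 25 m x 2 m = 2500 m^3.
pools = volume / 2500.0
```

    Using the heap density of ≈ 1000 kg / m3 instead would inflate the volume (and the pool count) by about 50 %.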



  • I definitely see your point, but at the same time, isn’t public debate in text the best tool we have here for an open discussion?

    Regardless, I can understand and respect that you don’t want to spend time on public discussions about moderation. As “some mod somewhere” once said: if you don’t like how it’s being done, become a mod yourself. That, I can respect.


  • Not even just “not so bad”: as a scripting language, I’d say it’s fantastic. If I’m writing genuinely complex code, static typing is much easier to work with, but if you want to hack something together quickly, Python is great.

    It also interfaces extremely easily with C++ through pybind11. For most of what I do, I end up writing the main code base in C++ and a lightweight wrapper in Python: when using the lib you don’t have to think about anything, you just hack away in dynamically typed Python and let the compiled C++ do the heavy lifting.
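    A minimal sketch of that split (the module name `_fastlib` and the function `dot` are made up for illustration; the pure-Python fallback stands in here for the compiled pybind11 extension):

```python
# Thin Python wrapper around a hypothetical compiled extension `_fastlib`,
# which would be built from C++ with pybind11.
try:
    from _fastlib import dot  # compiled C++ does the heavy lifting
except ImportError:
    # Pure-Python fallback so the wrapper still works without the extension.
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))


def weighted_sum(values, weights):
    """User-facing API: plain Python lists in, a number out."""
    return dot(values, weights)
```

    On the C++ side, pybind11 reduces the binding itself to a few lines (roughly `PYBIND11_MODULE(_fastlib, m) { m.def("dot", &dot); }`), which is what makes this workflow so lightweight.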








  • The most likely argument I see is that Trump severely strained diplomatic bonds, both between North America and Europe and within each of them. Additionally, he ushered in a new degree of isolationist policy and created doubt about the resilience of NATO. Furthermore, he tried to blackmail the Ukrainian government.

    In summary: not directly his fault, but his politics led to a situation where Russia/Putin saw it as likely that they could invade without facing significant backlash from Europe and North America. That probably would have worked out, too, if Ukraine had folded within the first couple of weeks. The argument is essentially that, by convincing Russia that it could take Ukraine without significant consequences, his administration contributed to the invasion happening.

    Make of that argument what you will. Personally, I think it’s a bit of a stretch to call it “Trump’s fault”, but it’s reasonable to think that another administration might have been able to deter the invasion.







  • I want to respond to your edit:

    wait for consensus before you publish, don’t publish anything that isn’t peer reviewed and replicated multiple times.

    You need to understand that publishing is how scientists communicate with each other. Of course, all reputable journals conduct peer review before publishing, but peer review is just that: review. The peer review process is meant to uncover obviously bad, or poorly communicated, research.

    Replication happens when other scientists read the paper and decide to replicate it. In fact, most replication is likely never published, because it’s done as part of model/rig verification and testing. For example: if I implement a model or build an experimental rig and want to make sure I did it right, I’ll replicate some existing work to test it. If I successfully replicate it, I’m probably not going to spend time publishing that, because I built the rig/implemented the model to do my own research. If I’m unable to replicate it, I’ll first assume something is wrong with my rig/implementation. If I can rule that out (maybe by replicating something else), I might publish the new results on the work I couldn’t replicate.

    Consensus is built when a lot of publications agree on something, to the point where, if you aren’t able to replicate it, you can be quite confident it’s because you’re doing something wrong.

    Basically: the idea of waiting for consensus before publishing can’t work, because consensus is formed by a bunch of people publishing. Once a solid consensus is established, you’ll have a hard time getting a journal to accept an article that merely confirms it.


  • I really don’t see the hassle… just pick one (e.g. pip/venv) and learn it in half a day. It took college-student me literally a couple of hours to figure out how to distribute a package to my peers via PyPI that included compiled C++ code. The hardest part was figuring out how to cross-compile the C++ lib. If you think it’s that hard to understand, I really don’t know what to tell you…