This observation further supports the hypothesis of a “fun wandering story that has been told from person to person for a long time”.
I’m assuming they’re indicating that the mass below the apparatus increased in fall (when storage was filled) and decreased slowly through the winter, leading them to measure a changed gravitational constant. A back-of-the-napkin calculation shows that in order to change the measured gravitational constant by 1 % by placing a point mass 1 m below the apparatus, that point mass would need to be about 15 000 tons. That’s not a huge number, and it’s not unlikely that their measuring equipment could measure the gravitational acceleration to much better precision than 1 %. Still, I think it sounds a bit unlikely.
Remember: if we place the point mass (or equivalently, the centre of mass of the coal heap) 2 m below the apparatus instead of 1 m, we need 60 000 tons to get the same effect (because gravitational force scales as the inverse square of the distance). To me this sounds like a fun “wandering story” that, without being impossible, definitely sounds unlikely.
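The scaling itself is a two-liner, if you take the 15 000 ton figure from above as the baseline at 1 m:

```python
def mass_needed(distance_m, mass_at_1m_tonnes=15_000):
    """Mass needed at a given depth below the apparatus to produce the
    same pull: force scales as 1 / r^2, so the required mass scales as r^2."""
    return mass_at_1m_tonnes * distance_m ** 2

print(mass_needed(2))  # 60000 tons, matching the figure above
```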
For reference: the coal consumption of Luxembourg in 2016 was roughly 90 000 tons. Coal has a density of roughly 1500 kg/m3, so 15 000 tons of coal is about 10 000 m3: a 21.5 m × 21.5 m × 21.5 m cube, or about four Olympic swimming pools.
Edit: The above density calculations use the density of coal, not the (significantly lower) density of a coal heap, which contains a lot of air in between the coal lumps. My guess for the density of a coal heap is in the range of ≈ 1000 kg/m3 (equivalent to guessing that a coal heap has a void fraction of ≈ 1/3).
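The volume arithmetic above is easy to check in a few lines (the 2500 m3 per Olympic pool is my assumption for a standard pool volume):

```python
def heap_volume(mass_tonnes, density_kg_m3):
    """Volume of the heap in m^3, the edge length of an equivalent cube,
    and the number of Olympic pools (assumed ~2500 m^3 each)."""
    volume_m3 = mass_tonnes * 1000 / density_kg_m3
    cube_edge_m = volume_m3 ** (1 / 3)
    pools = volume_m3 / 2500
    return volume_m3, cube_edge_m, pools

# solid coal at 1500 kg/m^3: 10 000 m^3, a ~21.5 m cube, 4 pools
print(heap_volume(15_000, 1500))
# looser heap at ~1000 kg/m^3: 15 000 m^3
print(heap_volume(15_000, 1000))
```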
I believe it does “one pass” when it loads the code into RAM, because syntax errors can be caught before anything runs. But I think the actual interpretation happens pretty much one line at a time :)
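You can see this with CPython’s built-in compile(): a syntax error anywhere in the source is reported before a single statement runs:

```python
# a print() on line 1, followed by a syntax error further down
source = 'print("this never runs")\ndef broken(:\n'

try:
    compile(source, "<example>", "exec")
except SyntaxError as err:
    # raised at compile time; the print() above never executed
    print("caught before execution:", err.msg)
```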
I definitely see your point, but at the same time, isn’t public debate in text the best tool we have here for an open discussion?
Regardless, I can understand, and respect, that you don’t want to spend time on public discussions about moderation. As “some mod somewhere” once said: If you don’t like how it’s being done, become a mod yourself. I can respect that.
Not even “not so bad”: as a scripting language, I would say it’s fantastic. If I’m writing any actually complex code, static typing is much easier to work with, but if you want to hack together some stuff, Python is great.
It also interfaces extremely easily with C++ through pybind11, so for most of what I do, I end up writing the main code base in C++ and a lightweight wrapper in Python. That way you don’t have to think about anything when using the lib: just hack away in dynamically typed Python, and let your compiled C++ do the heavy lifting.
That’s a compiled language. An interpreted language is translated to machine instructions at runtime; in Python’s case, pretty much one line at a time.
Disclaimer: To the best of my knowledge, please correct me where I’m wrong.
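For what it’s worth, in CPython specifically the translation target is bytecode rather than assembly: source is compiled to bytecode, which the interpreter loop then executes. The standard-library dis module shows that intermediate form:

```python
import dis

def increment(x):
    return x + 1

# print the bytecode CPython's evaluation loop actually executes
dis.dis(increment)
```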
I agree that moderators have an important job, and I appreciate the effort you put into it, but is
Subsequent comments on the topic will be deleted.
really necessary? I can’t see how it would hurt to let them explain themselves in the comment section, where everyone can see.
Whale can really go both ways. You have to prepare it right; then it’s really good. But if you’re not careful, it’s very easy to make it dry and chewy, which it shouldn’t be.
I think you would be interested in reading a bit on the philosophy of Thomas Hobbes and “the monopoly on violence”.
A legal arrest can be violent. A soldier killing another is definitely going to be violent. Both can be legitimate uses of force.
I use a GUI (GitKraken) to easily visualise the different branches I’m working on, the state of my local branches vs. the remote, etc. I sometimes use the GUI to resolve merge conflicts. 99 % of my git usage is command-line based.
GUIs definitely have a place, but that place is specifically doing the thing the command line is bad at: visualising stuff.
The most likely argument I see is that Trump severely strained diplomatic bonds, both between North America and Europe and within each of them. Additionally, he ushered in a new degree of isolationist policy and created doubt about the resilience of NATO. Furthermore, he tried to blackmail the Ukrainian government.
In summary: not his fault directly, but his politics led to a situation where Russia/Putin saw it as likely that they could invade without facing significant backlash from Europe and North America. That probably would have worked, too, had Ukraine folded within the first couple of weeks. The argument is essentially that, by convincing Russia that it could take Ukraine without significant consequences, his administration contributed to the invasion happening.
Make of that argument what you will. Personally, I think it’s a bit of a stretch to say it’s “Trump’s fault”, but reasonable to think that another administration might have been able to deter the invasion.
Hey, good news! The newer MacBooks (since about two years ago) have dropped the Touch Bar, brought back the ol’ reliable scissor switches and MagSafe chargers, and have enough ports to plug stuff in.
As for ARM vs. Intel, I’m a huge fan of the ARM chips. My largest issue with them is that I need to cross-compile for Intel chips if I’m distributing an executable or a compiled library.
Hehe, he’ll probably have to apply for a visa. There’s probably still a diplomat or two staffing the embassy in Moscow. As for the practical side of travelling: before long it should be perfectly fine to go on skis, which is faster than trudging to Kirkenes on foot.
Boy do I have news for you…
I like the other response here
assembly is a gauss gun… you just have to manually align the magnets
If you write C/C++ libraries for Python, you can release the GIL.
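You can observe the effect from pure Python too. C-level calls that release the GIL let threads genuinely overlap; here time.sleep stands in for a native number-crunching routine that releases the GIL while it works:

```python
import threading
import time

def native_work():
    # time.sleep is implemented in C and releases the GIL while waiting,
    # much like a well-behaved C/C++ extension doing heavy computation
    time.sleep(0.5)

start = time.perf_counter()
threads = [threading.Thread(target=native_work) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.perf_counter() - start
print(f"two overlapping 0.5 s calls took {elapsed:.2f} s")  # ~0.5 s, not 1.0 s
```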
I want to respond to your edit:
wait for consensus before you publish, don’t publish anything that isn’t peer reviewed and replicated multiple times.
You need to understand that publishing is the way scientists communicate with one another. Of course, all reputable journals conduct peer review before publishing, but peer review is just that: review. The peer review process is meant to uncover obviously bad, or poorly communicated, research.
Replication happens when other scientists read the paper and decide to replicate. In fact, by far most replication is likely never published, because it is done as part of model/rig verification and testing. For example: if I implement a model or build an experimental rig and want to make sure I did it right, I’ll go replicate some work to test it. If I successfully replicate, I’m probably not going to spend time publishing that, because I built the rig / implemented the model to do my own research. If I’m unable to replicate, I’ll first assume something is wrong with my rig/implementation. If I can rule that out (maybe by replicating something else), I might publish the new results on the stuff I couldn’t replicate.
Consensus is built when a lot of publications agree on something, to the point where, if you aren’t able to replicate it, you can be quite confident it’s because you’re doing something wrong.
Basically: The idea of waiting for consensus before publishing can’t work, because consensus is formed by a bunch of people publishing. Once solid consensus is established, you’ll have a hard time getting a journal to accept an article further confirming the consensus.
I really don’t see the hassle… just pick one (e.g. pip/venv) and learn it in like half a day. It took college-student me literally a couple of hours to figure out how to distribute a package to my peers, including compiled C++ code, using PyPI. The hardest part was figuring out how to cross-compile the C++ lib. If you think it’s that hard to understand, I really don’t know what to tell you…
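For reference, the basic pip/venv loop is a handful of commands (the package name here is just a placeholder for whatever your project depends on):

```shell
python3 -m venv .venv          # create an isolated environment
. .venv/bin/activate           # activate it (Windows: .venv\Scripts\activate)
pip install requests           # install dependencies into the venv only
pip freeze > requirements.txt  # pin what you installed
deactivate                     # leave the environment
```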
Since you seem to know a lot about this: I would think that at some point the sheer physical size of a device prohibits shared cache, simply because the distance from a CPU core to the cache can’t be too big. Do you know when this comes into play, if it does? Also, having written some multithreaded computational software, I’ve found that there’s typically (for the stuff I do) a limit to how many cores I can efficiently make use of before the overhead of spawning and joining threads eats the advantage of sharing the work between cores. What kind of “everyday” server stuff efficiently makes use of ≈300 cores? It’s clearly some set of tasks that can be done independently of one another, but do you know more specifically what kinds of things people need this many cores on a server for?
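On the overhead point: a reusable pool amortises thread start-up cost, but per-task dispatch overhead can still dominate when individual tasks are tiny. A minimal sketch with the standard library (the task sizes are made-up numbers, not a benchmark):

```python
from concurrent.futures import ThreadPoolExecutor
import math

def work(n):
    # CPU-bound toy task; cost grows with n
    return sum(math.sqrt(i) for i in range(n))

def run_parallel(task_sizes, workers):
    # each submitted task pays a fixed scheduling cost, so parallelism
    # only pays off once the per-task work dwarfs that overhead
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(work, task_sizes))

results = run_parallel([10_000] * 8, workers=4)
```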