(comments)

Original link: https://news.ycombinator.com/item?id=38790597

However, in practice, when programming or working with mathematical expressions, it often behaves like an odd number due to various implementation details related to modular arithmetic, floating-point representation errors, and so on. For example, when computing the average of a list containing zero in a fixed-point representation system, the computed average may incorrectly round up instead of down. This leads to apparent inconsistencies in the evenness property, especially in edge cases involving few iterations, low-precision arithmetic, or insufficiently randomized seeds. To address these issues, alternative techniques may be needed, such as rounding to nearest or adjusting scaling parameters based on contextual analysis. So although in theory zero may always be considered even, in practice, treating it as odd according to certain criteria can sometimes yield better results for specific applications.

Related articles

Original article
Hacker News
4B If Statements (andreasjhkarlsson.github.io)
1319 points by r4um 1 day ago | hide | past | favorite | 435 comments










I wish I still had one of the earliest programs I wrote. It would have been 1996, I was 16, and I had a linear algebra book which had an appendix entry on computer graphics. I had taken a programming class the prior quarter.

I got hooked writing a program to draw rotating wireframes of a few different shapes. I almost failed the class because I was so distracted by this.

I didn't know about arrays yet. Every vertex was its own variable with a hardcoded default. Every entry in the rotation matrices was its own variable. The code to do matrix multiplication therefore didn't have loops, just a long list of calculations. Which had to be duplicated and adapted to every vertex.

I did know about pointers, I had to draw to the screen after all, which was done by writing to memory starting at a known address. So at least I had loops rasterizing the lines between vertices.

Which means I definitely had the concept of arrays and indexing into them, I just didn't know enough to make my own.



Same. When I was 12ish trying to write a Pac-Man game in basic, I dreaded writing the logic for the 4 ghosts. I'd need separate code for the ghosts at (x1,y1)....(x4,y4) and said to my dad it would be cool if I could use xn and yn inside a for loop, but those are just 2 more variables. I wanted n to indicate which one. This seemed kind of weird, but it's what I really really wanted. He grabbed the little basic book and showed me that x(n) was actually a thing!

I think back to this in discussions about teaching. Abstract concepts are best understood when the student has a legitimate need for them. You can explain something all day and have their eyes glaze over wondering why something is important, but if it solves a problem they have it will click in seconds or minutes.
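
The jump described above, from numbered variables to an indexed collection, can be sketched like this (in JavaScript rather than BASIC, where `X(N)` plays the same role; the ghost coordinates are made up for illustration):

```javascript
// Instead of x1..x4 and y1..y4, one record per ghost:
const ghosts = [
  { x: 1, y: 1 }, { x: 5, y: 1 },
  { x: 1, y: 5 }, { x: 5, y: 5 },
];

// One loop replaces four copies of the movement code;
// n picks out which ghost — exactly the "xn" that was wished for.
for (let n = 0; n < ghosts.length; n++) {
  ghosts[n].x += 1; // move each ghost one cell to the right
}

console.log(ghosts.map(g => g.x)); // [ 2, 6, 2, 6 ]
```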



This comes up pretty often in (well-designed) video games. Say you have colored doors that can only be opened by keys of the same color. If you put the blue key on the path to the blue door, then players will never learn that you need the color of the key to match the color of the door. But if you put the blue key on a side path and have the player encounter a blue door relatively early on, then they will encounter the door first and cannot progress until they find the key. The frustration at not being able to progress is much more effective at teaching the player about the "colored keys for colored doors" mechanic than any amount of explaining you could have done.


When I started programming at 12-13 I had no idea loops or arrays existed and implemented an absolutely criminal Tic-Tac-Toe game.

The same year me and a friend were studying for the national Informatics Olympiad and we were completely stumped when faced with a problem that required a list: "How do we know how many variables to declare if we don't know the N beforehand?".

(The problem if anyone is interested: https://neps.academy/br/exercise/384. Given a list of up to 50k distinct numbers (with values up to 100k) and another list of 50k numbers that should be removed, print the resulting list. Time Limit: 1 second.)
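
A sketch of the standard approach for that problem (assuming the statement as summarized above: values are bounded by 100k, so a boolean lookup table avoids any per-element searching and comfortably fits the 1-second limit):

```javascript
// Mark every value to remove in a lookup table, then filter the
// original list in one pass: O(n + m) instead of O(n * m).
function removeAll(list, toRemove) {
  const removed = new Array(100001).fill(false); // values are <= 100k
  for (const v of toRemove) removed[v] = true;
  return list.filter(v => !removed[v]);
}

console.log(removeAll([7, 3, 9, 4], [3, 4])); // [ 7, 9 ]
```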



When I first started coding, I thought loop structures were confusing and redundant when I could accomplish the same thing with GOTO. So that’s how I implemented all my loops for years until I eventually took a course and a teacher told me I was never to use GOTO again.


Dijkstra's “goto considered harmful“ is often interpreted as a much broader opinion than it was, but this is a perfect example of what he was against


Yeah Dijkstra's main objection to goto is that it's really hard to formally define its semantics and even if you do, the formalism is so complex it's hard to reason about.

Break and continue on the other hand are considerably easier to formalize.



> national Informatics Olympiad

That brought back a lot of memories... I wonder if there are participation records. The olympiad website isn't responding. I also took part in the astronomy olympiad.



The OBI website is really great, it's just been down for a couple of days. There's a good chance you'll find records there.


This seems to be a somewhat common occurrence. When I was first learning programming with python I wanted multiple balls in a pong-like game, so after some googling I ended up writing something like `globals()["ball" + i]` until someone mentioned that arrays were a thing I could use for that.


It’s a recurring thing because programming at its fundamental level is extremely simple. If you have variables, conditionals and loop/goto you can do anything. It’s pretty easy to learn the basics and get much more empowered than you had any right to be.

Armed with such power, you will build a slow, unreadable and unmaintainable mess, but you don’t know that yet, because it’s all working after all.

But as your lines multiply you begin to misstep on your own code and wonder if there’s a better way to do this. And you then stumble upon data structures, functions, classes, etc.

The self-taught road is always more or less like that.



The obvious solution here is to use the lower part of the screen as working memory while drawing the upper part. By the time you get to the bottom you shouldn't have many calculations left anyway. This has the benefit of using fast gpu ram and is therefore cuda and very ai.


If only GPUs existed back then. This would have been very slow memory mapped video ram.


It reminds me of one of my earliest ventures into freelancing, where all I had was a tiny VPS that supported PHP. I had to process a large spreadsheet which had maybe 5/10k entries in it. Small potatoes these days, compared to that time in 2002/2003.

I wasn't a computer scientist so I read the file in the dumbest way possible, but kept getting errors about memory usage and running out of space since I was looping within loops within loops. I went through it and added `$variable = null` at every opportunity. Lo and behold it worked.



I did something similar for my big middle school hit, a TI-83 version of Snake. Every x and y position of every segment of the snake went into a separate variable. The snake could only get so long because TI-83 BASIC only gives you so many variables.


The first part of GWBasic I learned by asking someone for help was "chain", after having taught myself print, input, if, and goto from the documentation.


Do you recall which shapes you created?


I remember there were 3, and that there was a cube and a pyramid. I can't recall the third. Maybe I did both a square pyramid and a triangular pyramid.


It seems over-engineered to me. Why go through all the hassle of code generation? It can be solved with a simple “for loop”:

  func isOdd(n int) bool {
    var odd bool
    for i := 0; i < n; i++ {
      odd = !odd
    }
    return odd
  }
Playground link: https://go.dev/play/p/8TIfzGrdWDF

I did not profile this one yet. But my intuition, and my industry experience, tell me that this is fast.



A true production quality implementation should always use recursion.

  func isOdd(n int) bool {
    switch {
    case n == 0:
      return false
    case n > 0:
      return !isOdd(n-1)
    default:
      return !isOdd(n+1)
    }
  }


You can also use beautiful mutual recursion à la OCaml

  let rec isEven n = 
    if n = 0 then true
    else isOdd (n-1)
  and isOdd n = 
    not (isEven n)


FYI ES2015 added tail-call optimization, allowing JS to finally have an elegant isOdd function.


Only Safari supports it, though


This will cause a stack overflow. You can convert that into a loop and make your own stack on the heap if you need very deep recursion.


If you can keep the local variables down, a simple ulimit -s 60000000 should be able to get that stack limit nice and big!

You may need to call setrlimit with an appropriate RLIMIT_STACK, but I think the FFI compatibility should be able to pull that off, right?



In a language with tail-call optimization, it won't.


It will. The negations are only being applied on the way back up the call stack. The mutual recursion version in a sibling post can be made to work though.
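
For the curious, the reason tail-call optimization can't save the direct version is exactly this: the `!` runs after the recursive call returns, so the call isn't in tail position. A sketch (in JavaScript, using a hypothetical accumulator helper not from the thread) of a variant where the recursive call genuinely is the last operation:

```javascript
// Thread the running parity through an accumulator so the recursive
// call is the final action of the function body (a proper tail call).
function isOddAcc(n, acc = false) {
  if (n === 0) return acc;
  // Step toward zero from either direction, flipping parity each step.
  return isOddAcc(n > 0 ? n - 1 : n + 1, !acc);
}

console.log(isOddAcc(5));  // true
console.log(isOddAcc(-4)); // false
```

Only engines with proper tail calls (per the thread, currently just Safari's JavaScriptCore) run this in constant stack space; elsewhere it still overflows for very large n.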


> This will cause a stack overflow. You can convert that into a loop and make your own stack on the heap if you need very deep recursion.

Doesn't Go allocate / extend stacks using dynamic allocation rather than having a static limit (e.g. like C)?



this is truly evil :)


I can confirm that the rust version of this one is fast:

    playground::isOdd:
     testq %rdi, %rdi
     setg %al
     andb %dil, %al
     retq
(Click ... beside build to get assembly) https://play.rust-lang.org/?version=stable&mode=release&edit...

Unfortunately the go playground doesn't seem to support emitting assembly?



Shameless plug to claim Rust superiority!1

No, Go playground can not do that :(



https://godbolt.org/ supports emitting assembly for Go while Google's tools catch up!


Good point! Neither gc nor gccgo -O seem to figure this function out :(

https://godbolt.org/z/eMv41nc6Y

Proof, that you must use rust if you want blazingly fast execution of fearlessly pessimized code!



This commands an issue in the Go repo!


Don't forget the companion isEven function:

    func isEven(n int64) bool {
        return !isOdd(n)
    }


Wow you can call a function from a function ? It’ll revolutionize my productivity.


You can improve this with tail recursion.


I think this counts as a bottom-up DP solution


It will iterate infinitely for n=infinite.


Is infinite even or odd?


Ideally it should throw an exception, I guess.


Yes.


You, sir, are a raging psychopath. You have found a local maximum of brevity and absurdity and I, for one, salute you.


This approach is perfect for the is-even npm package[1], with 196,023 weekly downloads, or the is-odd npm package[2], with 285,501 weekly downloads. Wouldn't it be cool if you type `npm install` and it starts downloading a 40GB is-even and a 40GB is-odd?

[1] https://www.npmjs.com/package/is-even

[2] https://www.npmjs.com/package/is-odd



It's always worth mentioning that those packages stem from the effort of one dedicated npm spammer[1] to weasel his way into as many node_modules directories as possible. There's also packages for ansi-colors (no, not one package for all colors, one package for each color) and god knows what else. These then get jammed into any cli tool or other half-way sensible looking package, all referencing each other obviously - and any "real" project is just a single innocuous dependency away from having dozens of jonschlinkert packages.

[1] https://www.npmjs.com/~jonschlinkert



> Several years ago, just before my 40th birthday, I switched careers from sales, marketing and consulting to learn how to program, with the goal of making the world a better place through code [1]

It checks out, sales/consulting folks are pretty infamous for their tendency to abuse metrics. The metric here is npm downloads and Github stars.

The strategy does mean that he's _technically_ not inaccurate in claiming this on his LinkedIn -

> NASA, Microsoft, Google, AMEX, Target, IBM, Apple, Facebook, Airbus, Mercedes, Salesforce, and hundreds of thousands of other organizations depend on code I wrote to power their developer tools and consumer applications.

I encountered this type a lot in college consulting groups, it's a little funny seeing one make their way to the OSS community.

[1] https://github.com/jonschlinkert



> To date, I've created more than 1,000 open source projects

This dude is counting each one as a project hahaha



It's the truth but not the whole truth


It reminds me of graffiti tagging!


Yeah but this might make him the Patient Zero for modern tech influencing and blogspam.


Anyone know if this has spread to crates.io yet? I see plenty of name squatting, but I haven't run into real crates trying to insert themselves into everything. Namespaces are sorely needed, including some semi-official ones. candi::rand would be reasonable for candidates to enter std. Watching the battles over tokio getting into candi would be fun.


> Watching the battles over tokio getting into candi would be fun.

What would be the argument for this? It's my understanding that the lack of a default runtime is considered a strength



When I first heard of the is-odd and is-even NPM packages I was sure they were a joke, yet there we are: 200K weekly downloads! Publishing the packages may have been the effort of one spammer, but many developers obviously chose to use them - that's the part that boggles my mind.


Well, look at one of the dependents: https://www.npmjs.com/package/handlebars-helpers - certainly a more useful npm package, but by the same author. Seldom do people actually type `npm install is-even` - there's just a jungle of transitive dependencies that can be traced back to one of jonschlinkert's many packages, which then circles back to something inane as `is-even` or `ansi-red`.

I once ran a simple grep in some of my node projects - most of them had a jonschlinkert package in node_modules, certainly not through any (direct) choice of my own.



Check out this very useful utility: https://github.com/mitsuhiko/is-jonschlinkert :)


Possibly related: https://github.com/SukkaW/nolyfill vs. ljharb and nodejs 4


I recently came across this issue (https://github.com/import-js/eslint-plugin-import/issues/213... and https://github.com/import-js/eslint-plugin-import/issues/181...) and man, that person has really found himself a hill.

What is a bit worrying though is that he is an active member and contributor to TC-39. Meaning that this kind of community hostility is very much alive among the people who rule JavaScript.



View their obstinacy: https://github.com/nvm-sh/nvm/issues/794

8 years later and despite much support for the `.node-version` file.

Someone else started using the .node-version file years ago, and because all open source packages won't form a committee to standardize this file, nvm will not support it.

They have a lot of hills. JS Private Properties was another.



Very often when I’m digging around GitHub Issues because of some bug or quirk or insanity in the JS ecosystem (which is often) I see someone spout the worst possible take - often being kind of a jerk about it - and when I look to see who’s responsible, very very often, ljharb’s name pops up. Often.

Dogpiling on someone deep in an HN comments tree isn’t exactly the classiest thing but…never having interacted with him myself, I’ve been harbouring this low-grade antipathy towards him - nothing unhealthy, just a groan whenever I see his name on GH - for years now, and it’s cathartic and almost gratifying, given his prominence in the community, to feel seen like this. Thank you.



I was doing some snooping around and found this earlier discussion https://news.ycombinator.com/item?id=37602923 from September 2023 (234 points, 110 comments), which I had missed at the time, very related to this current subthread, and includes posts such as these (https://news.ycombinator.com/item?id=37604635).

I think we as a community really need to have a conversation about ljharb and his role in the future of our industry. If he was only a library maintainer, that would be one thing, we could just move on, find workarounds, alternatives, etc. But his involvement in TC-39 makes him one of our rulers in a non-democratic structure. That makes this different.



To be fair, the ESM switch has been botched beyond compare.

It’s like they didn’t want to become Python 2/3, and then did the absolute worst possible alternative.

It is beyond frustrating that it’s up to individual package authors whether or not their package supports ESM or CommonJS.

And yeah, it’s a pita when one of your downstream dependencies decides to go ESM only, and breaks your entire friggin chain of stuff that depends on it being CommonJS.



I agree. Mistakes were made and migration has been unnecessarily hard. This particular issue doesn’t even affect me. I simply turned off `no-unresolved`. TypeScript handles a lot better anyway (even with jsdoc type annotations).

But what is the issue here is the stubbornness of the maintainer and his unwillingness to accommodate a very sizable portion of his user base. The industry is moving on, and as a TC-39 member he should be aware of where the community is moving as well as show some empathy with his users.



I love the release notes:

> It exists.



Would it be considered bad practice or in poor taste to make pull requests in projects with the sole objective of implementing is-even natively?

Thinking of being the change I want to see in the world.



It’s probably contextual but I’d say it’s in poor taste to use them in the first place. Removing them is a net benefit imo.


I see, thanks. I guess that answers one question, but raises another: why have his packages depend on more of his packages? If his goal was to be included in as many node_modules directories as possible, and handlebars-helpers was already included, what's the point of pulling in is-odd/is-even, too?


Might be this from his GitHub bio “Several years ago, just before my 40th birthday, I switched careers from sales, marketing and consulting to learn how to program” Good way to get more eyeballs…


Ah yes, that actually explains a lot! Thanks.


Makes is-odd/is-even popular; many downloads; raises their (and his) profile.


Here we are all talking about him now!


He could sell the rights to the repos and disavow any knowledge of their maintenance while keeping the link in his own repos. When those sold rights are used to commit some crime, he has plausible deniability like anyone else, but got a payday. Spinning off the subpackage just prior to a sale, though, would show some sort of intent.


Is there any evidence that he has ever done anything like this, or that he plans to? Or is this just pure speculation?


I didn't declare he's done this, only that it is a vulnerability of depending on those packages.


I did learn one new thing from browsing the is-odd source code: Number.isSafeInteger(n) checks that n is an integer falling within the [Number.MIN_SAFE_INTEGER, Number.MAX_SAFE_INTEGER] interval.

  ...
  if (!Number.isSafeInteger(n)) {
    throw new Error('value exceeds maximum safe integer');
  }
  ...


Often the is-even package will be imported by another (less ridiculous) package of the same author.


The bigger joke is that the programming language has no built-in.


Huh? Did you miss the modulo operator?


It is not just modulo; it also tests whether the input is a number or integer. It also has one dependency, is-number. And isOdd(“3”) returns true.

Basically it helps dealing with all the typing problems of Javascript, and also fitting into functional programming paradigms.
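
Concretely, the coercion happens in the first line of the package source (quoted further down in the thread): `Math.abs` converts numeric strings to numbers before the integer checks run. A minimal JavaScript demonstration:

```javascript
// Math.abs coerces its argument to a number, so the string '3'
// becomes the number 3 before any integer checks ever see it.
console.log(Math.abs('3'));           // 3
console.log(Math.abs('3') % 2 === 1); // true — hence isOdd('3') === true
```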



If you need to know if a number is even or odd then you should convert it to an int natively anyway.


Given the lack of integers and my hesitancy to trust modulo for non-integer variables, I don't know if I would trust it. You would need to add some safety checks, but either you create an is_even/is_odd function that has safety checks, or you have to rely on developers adding in the checks anytime the number might have been in close proximity to a floating point number.

Something as simple as this can end up being neither even nor odd.

    (0.1 + 0.2) * 10


It wouldn't have 200k downloads a week unless there was a large number of programmers who didn't know how to do it.

I mean we're talking FizzBuzz secret sauce here. This must be total black magic for a catastrophic percentage of programmers



A more catastrophic number of programmers a) don't know how dependencies work and b) think everyone else is an idiot.


Perhaps, I solidly understand how dependencies work and in this case my observation is defensible

Someone made the decision to use that, and someone thought using stuff made by a person who makes those kinds of decisions was a good idea, and so on.

You can git blame dependencies all the way down and research the parties involved. I've done it, built tools for it even.

A stack of people who make bad decisions doesn't make good software.



I have not read the source but I had always assumed that this was the lovingly crafted effort of someone who is intimately familiar with the js standard making sure that some hypothetical expression like ![1] is neither odd nor even. Surely the idea that modulo is beyond developers is too horrifying to contemplate.


Here you go:

    /*!
     * is-odd 
     *
     * Copyright (c) 2015-2017, Jon Schlinkert.
     * Released under the MIT License.
     */
    
    'use strict';
    
    const isNumber = require('is-number');
    
    module.exports = function isOdd(value) {
      const n = Math.abs(value);
      if (!isNumber(n)) {
        throw new TypeError('expected a number');
      }
      if (!Number.isInteger(n)) {
        throw new Error('expected an integer');
      }
      if (!Number.isSafeInteger(n)) {
        throw new Error('value exceeds maximum safe integer');
      }
      return (n % 2) === 1;
    };
It does some checking that `value` is an integer in the safe range, which doesn't even seem right to me. Why shouldn't you be able to call this on integers outside the safe range?


All non-safe integers are even, yes?


(10e15 + 1) % 2 === 0
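
Beyond `Number.MAX_SAFE_INTEGER` (2^53 − 1) this stops being a joke: the odd neighbors simply can't be represented, as a quick JavaScript check shows. This is presumably what the `isSafeInteger` guard in the package is defending against.

```javascript
// 2**53 + 1 has no exact double representation; it rounds back
// down to 2**53, so the modulo test reports it as even.
console.log(Number.MAX_SAFE_INTEGER); // 9007199254740991
console.log(2 ** 53 + 1 === 2 ** 53); // true
console.log((2 ** 53 + 1) % 2);       // 0
```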


Sad but true. For JavaScript these kinds of functions can actually be useful because of all the quirks. If that was the GP's hint then I can understand.


My favorite is the existence of both https://github.com/jonschlinkert/ansi-gray and https://github.com/jonschlinkert/ansi-grey that do the exact same thing.


Another classic from jon is the venerable https://www.npmjs.com/package/isobject coming in at 30,000,000 weekly downloads for

`function(val) { return val != null && typeof val === 'object' && Array.isArray(val) === false; }`

But honestly, having seen "TypeError: Cannot read properties of null" enough times, I give it a pass.

https://npm-stat.com/charts.html?author=jonschlinkert paints a pretty crazy picture



Seriously fuck this dude leeching on the open source community.

He's not even a technical guy but has a background in marketing, and is directly trolling various GitHub issues:

https://news.ycombinator.com/item?id=28661094

I've always wondered why node_modules and npm required such an insane number of packages so quickly, and now I know why: people like him that use their 10,000 ridiculous packages to boost their career, or do whatever self-serving, community-destroying thing they can think of that day.

There really should be a way to ban people doing this shit.



It's more a statement about the community than him imo. Or maybe about what npm allows to be published. Nobody forced you to use is-odd, yet it's downloaded 200k a week, why? Because js' developer community doesn't know any better


Maybe someone needs to do a website in the same spirit of https://youmightnotneedjquery.com except youmightnotneedjonschlinkert.


Notice that most dependents of his tools are: 1) his own projects, 2) abandoned beginner projects.

As some people mentioned, most of the usage of is-even is by tools made by this person.

But the other part is that he made quite a few development tools for beginners that scaffold new projects and pull lots of his packages as dependencies (especially the handlebars helpers), which heavily inflates the number of "Dependents" on npm.

The other issue is that, in the past, he managed to include some of his less-useless packages in some semi-popular tools.



A lot of these downloads may be CI systems running automatic tests.


They clearly are, but this still means a non-negligible amount of software has those packages in its dependencies.


Exactly right. It's a clever way of deflecting blame that the whole community should share by piling on this particular person.


This really sounds like victim-blaming. The community is vulnerable to somebody publishing asinine modules, which should be addressed. However, this individual is still the perpetrator.


Isn't the post you linked to about a different person?


I mostly write in languages without package managers so it is possible that my expectations are just wrong, but is there no way of showing your dependency graph when using npm?


Here are some popular ones. Now, in two of these cases, they can be dev-dependencies, but even there, it makes me angry, since all of those deps have to be downloaded to my machine, and can run any random code they want at install time (which is why Node projects should always be run in some kind of sandbox).

https://npm.anvaka.com/#/view/2d/jest

https://npm.anvaka.com/#/view/2d/tailwind

https://npm.anvaka.com/#/view/2d/aws-sdk



> can run any random code they want at install time (which is why Node projects should always be run in some kind of sandbox).

You can install with the --ignore-scripts flag. Or set the option globally in your npm config file.



> There really should be a way to ban people doing this shit.

???

There is. It's called "doing nothing."

It takes work to add a dependency to your project; they don't spring out of nothing.



The author hasn't forced you to import their package. What's there to ban apart from maybe polluting the common namespace?


That's hilarious. The best part is that he didn't even try to make good packages (not that hard) but just took the lazy route.


Well, one reason could be this software proverb: Good programmers are lazy programmers.


It's weird to me that in every other comment thread on Hacker News, people say "good software should do one thing and do it well". And then the topic moves to NPM, and suddenly everyone loses their mind?


A little copying is better than a little dependency.


This is part of the UNIX philosophy, but sometimes this gets taken way too far. There is of course the UNIX command "yes", which literally just prints the character "y" over and over again (to skip interactive prompts).


GNU yes will actually repeat any string—the default is of course "y". So it's very handy for scripting, and not really as single-purpose as you'd think.


also, it's actually faster than reading /dev/zero if you need some filler data hahahaha



Yeah it's definitely useful. Still a bash one liner though:

  function yes { while true; do echo "${1:-y}"; done; }


Haha ok do something substantial! It's implied.


So what excuse do the people installing these packages have?


I'm relatively new to the world of node. Is there anything objectively wrong, or should I say nefarious, with what this Jon person is doing? I guess, what's the issue here, from your perspective, if he's creating packages that, albeit simple, have some amount of utility?


In my view it is objectively wrong to create trivial npm packages, yes. If we look at the npm ecosystem as a commons, that person is polluting it. Of course you could say it's namespaced to one account, so what's the harm? In my view, plenty:

- package searches will show these packages due to the inflated usage from transitive deps.

- installs are slower due to the package noise.

- increased attack surface when they are used

- cultural normalization of throwaway packages

Probably more.



Each dependency causes work for any serious use. You got to check license, got to check for updates, risk supply chain attacks (package disappearing, package replaced with bad code, ...) etc. which causes longer term cost.

In addition abstraction of trivial checks, makes it harder to see the limitations of said routine. How well does it work on numeric strings? How well on large numbers where float properties cause issues?



This is the first time I'm hearing about this guy, but not sure why there's so much hate.

It looks like he came from a non-technical background, and is trying to make the language more noob friendly.

Compare the pair:

if isEven(n) { }

if (n % 2) == 0 { }

I guarantee you my wife would have no idea what the second snippet even means.



There's an argument to be made for writing functions like `isEven` and using them instead of n%2. There's an argument that the JS standard library, or other comprehensive util libraries like Lodash or Underscore should include these functions.

The problem here is introducing separate dependencies for each of these tiny functions. Dependencies are code that you haven't written, but are still your responsibility. For a lot of things, that's a good tradeoff: if you don't have the expertise in a specific area, or if you can offload work to a dependency that you trust, that's great. But for micro dependencies like this, it's usually a bad deal - you don't get anything in return (seriously, how hard is it to write your own isEven function?) but you have to rely on a third party to be secure, to not push anything accidentally broken, to not change the API, etc.

(I think it's also worth pointing out that your wife is not a paid programmer. Software development should be accessible, but this isn't the only goal, and I think it's reasonable to assume that most programmers either understand the n%2 idiom, or know enough to be able to find help on the subject.)



The hate is deserved. This person lacks ethics.


if !(n & 1) {} // the rightmost bit (LSB) is always set for odd integers


For code golf purposes, ~n&1 works fine
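
For reference, in JavaScript (where the npm packages under discussion live), bitwise operators coerce their operands to 32-bit two's-complement integers, so both tricks behave predictably, including for negatives:

```javascript
// n & 1 isolates the least significant bit: 1 for odd, 0 for even.
console.log(5 & 1);  // 1
console.log(-3 & 1); // 1 — two's complement keeps the LSB meaningful
console.log(4 & 1);  // 0
// ~n & 1 flips that bit, giving the code-golf isEven:
console.log(~4 & 1); // 1
console.log(~5 & 1); // 0
```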


// FIXME: This breaks on negative numbers on ones-complement machines


AussieWog93 should marry this guy instead.


Achieving supply-chain security by controlling every link in the chain. This Jon is very noble.


Packages should specify their entitlements: network access, file access, executing system calls, being able to monkey-patch things, accessing packages outside their own.


Deno has a few of these: by default, an application has no access to the filesystem, env or network.

You can allow only reading or only writing for files and you can also define what domains are allowed.



Sure it's a huge step forward. But that's on an application level, which honestly I can do with sandbox-exec on macOS (see below).

In Deno, all permissions would still propagate down to the dependencies.

  (version 1)
  (deny default)
  (allow file-read*
    (subpath "~/Downloads")
  )
  (allow network-outbound
    (remote tcp "localhost:80")
  )
or

  (version 1)
  (allow default)
  (deny network*)
or

  # start an airgapped shell, and play around in there..
  sandbox-exec -p "(version 1)(allow default)(deny network*)" bash


There have been a few GitHub issues open but from what I can tell it's in the "not technically possible with V8 VM" category


Realistically I think they long ago should have been banned from npm, with all submissions deleted.


You may not like it, but this is what 10x performance looks like


It seems very uncharitable to describe someone as a “spammer” because their philosophy on the proper size of units of code reuse is different from yours.


It seems overly charitable to me to frame the dispute in those terms. You and I might differ as to the appropriate amount of vermouth in a martini or pineapple on a pizza and I won't worry much about it, within reason.

There's a limit though! I think up to about 10% pineapple by weight is reasonable for a Hawaiian; if you choose 0 or 20% then we'll have no issue. If you went to 30% I don't think I could stop myself writing a libellous remark on hn. Anything above 50% and I would be morally forced to denounce you to the proper authorities. Above 80% is where the nightmares begin.



At a certain point, isn't it just grilled pineapple with pizza crumbs?


Even if you think having a dependency load `is_even()` is a good thing, surely you can see how it's pretty hard to defend having a completely separate and nearly identical `is_odd()` rather than just `!is_even()`


The argument is that non-programmers like his wife don't understand e.g. `!`.




Amazingly, following “don’t repeat yourself” in its purest form, is-even depends on is-odd:

  /*!
   * is-even 
   *
   * Copyright (c) 2015, 2017, Jon Schlinkert.
   * Released under the MIT License.
   */

  'use strict';
  
  var isOdd = require('is-odd');
  
  module.exports = function isEven(i) {
    return !isOdd(i);
  };


If this code runs on hundreds of millions of phones and computers, npm seems horrible for the environment! Not only because of runtime, but because it takes so much just to host and serve these modules!


These packages are performance art, and shall not be judged by efficiency metrics.


I bet a dollar you started writing "performance" and then rewrote it as "efficiency" :)


Good guess, but sorry :) You can pay at https://github.com/sponsors/fritzo?frequency=one-time


That’s right. Npm and the entire node ecosystem is just terrible. Specifically because you can get a production ready package from even Google, and you know it’s going to have some random useless package in the chain somewhere.


I haven't heard about him but checking the source tree of our 2 front-end apps and his 'is-number' package (which his is-odd depends on), seems to be imported by quite a few other packages.

Now looking at the source, that package may make sense if figuring out whether something is a number type in JS is really that cumbersome. (Though I'd expect that there is a more generic package that covers the other built-in types as well.)

Also, since isNumber treats strings that can be converted to a number as numbers, it can yield weird results, since adding two strings will naturally just concatenate them. So e.g.:

    const a = '1'; 
    isNumber(a); // returns true
    const b = a + a; // Now you have a string in b: '11'

Of course, it's standard JS stupidity (and 2*a would be 2, and 1+'1' and '1'+1 would both be '11'), but then maybe stating that '1' is a number is not the right response. However, the package was downloaded 46 million times last week and that seems to be so low only because of Christmas. The previous weeks averaged around 70M. And most of these are dependencies, like in our projects, I'm sure.
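For comparison, a minimal dependency-free check that rejects numeric strings might look like this (the helper name `isStrictNumber` is hypothetical, and this is deliberately not how the is-number package behaves):

```javascript
// Hypothetical stricter check: accept only actual number primitives,
// so numeric strings like '1' are rejected instead of coerced.
function isStrictNumber(value) {
  return typeof value === 'number' && Number.isFinite(value);
}

console.log(isStrictNumber(1));    // true
console.log(isStrictNumber('1'));  // false: numeric strings rejected
console.log('1' + '1');            // '11': + concatenates strings
console.log(2 * '1');              // 2: * coerces the string to a number
```

Whether rejecting '1' is the "right" answer is exactly the judgment call being argued about here.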


I did a nullll package [1] whose only purpose is to export a null and takes 400MB in memory but somehow it got flagged in HN [2]. With 41 stars on github and 100% test coverage [3], it was clearly production ready.

[1]: https://github.com/mickael-kerjean/nulll

[2]: https://news.ycombinator.com/item?id=17072675

[3]: https://github.com/mickael-kerjean/nulll/blob/master/test.js



That try-catch block[1] is a thing of beauty.

[1] https://github.com/mickael-kerjean/nulll/blob/master/index.j...



Shame it was flagged. Would've been handy for a project I was working on. Had to work around not having a performant null


Actually it’s not good enough there because JavaScript numbers are f64 rather than u32. Even if you only support the safe integer range (beyond which the answers become somewhat meaningless), that’s 2⁵⁴, more than four million times as large as 2³². Not sure how big the machine code would be, but I’m guessing it’ll just add 4 bytes (40%) to each case, so you’re up to something like 224 exbibytes. And that’s when you’re being lazy and skipping the last ten bits’ worth. If you want to do it properly, multiply by another thousand. Or maybe a little less, I haven’t thought too deeply about NaN patterns. Or maybe just infinite if it supports bigints.
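A quick sketch of where that boundary sits in JavaScript (plain Number arithmetic, no bigints):

```javascript
// Past Number.MAX_SAFE_INTEGER (2^53 - 1), adjacent integers collapse
// to the same f64 value, so parity answers stop being meaningful.
console.log(Number.MAX_SAFE_INTEGER === 2 ** 53 - 1); // true
console.log(2 ** 53 === 2 ** 53 + 1);                 // true: the +1 is lost to rounding
console.log(Number.isSafeInteger(2 ** 53));           // false
```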


But you can create a self-modifying function that, ‘for speed’, adds each case when it is called.

The initial function should start with checking for special cases NaN, infinities, 0 and -0, then do (may not be valid JavaScript)

  if ((x + 1 === x) && (x - 1 === x)) printf("even\n");
to handle cases over 2^54 or below -2^54, then do

  isOdd = false
  neg = x
  pos = x
  while(true) {
    if(neg++ === 0) break;
    if(pos-- === 0) break;
    isOdd = !isOdd
  }
to determine whether x is odd or even in isOdd. Doing the self-modification is left as an exercise.

This is good defensive programming. It avoids doing a tricky division by 2.0 that might be buggy (I don’t think https://en.wikipedia.org/wiki/Pentium_FDIV_bug was affected, but who knows what other FPUs do?)



But we can probably compress the data very well (e.g. "double delta" is always zero), and then decompress only needed parts.


This still requires reading through it all in memory at run time. Maybe we can optimize it with a JIT - what if the package called the ChatGPT api to get the python code that generates the machine code just for the number queried?


The funniest (or best part) about those two packages is that is-even has a dependency to is-odd and does just negate the output of is-odd.


It’s true, the great thing about clean, reusable, modular code like this is that you can compose both of these packages to make a is-even-or-odd package.


Well first you need to obviously build an OR package, part of your suite of logical operator packages, all depending on your battle-tested, high performance XOR package.




Groundbreaking!


It would be even funnier if is-odd would simultaneously do the same thing in reverse.


Implementing the comparisons by hand seems difficult and primitive. May I suggest introducing some helpful sub-packages and building the solution on top of those. For instance, is-odd would be implemented by using is-one, is-three, is-five, is-seven, etc.


and is-one should be implemented by negating is-two, is-three, and so forth


I read some text where a scheme was implemented such that numbers were `2 = 1+1`, `3 = 2+1`...


Perhaps Nock:

https://developers.urbit.org/reference/nock/definition

> The reader might wonder how an interpreter whose only arithmetic operation is increment can ever be practical.

> The short answer is that a Nock interpreter doesn't have to use the algorithm above. It just has to get the same result as the algorithm above.





>Succ Peano

...



The standard joke about my university's grad-level programming languages course is that its formalization unit does a great job of identifying undergrads by discussing Peano naturals and Hoare logic in the same lecture...


Theoretically only one 40gb file would be needed. AFAICT almost every odd number is not even (I cannot say "all" bc I have not checked every integer yet).


That's what unit tests are for, right?


Interesting that there are so many more weekly downloads for is-odd. This seems to suggest that there are many more odd numbers than even.

(Alternatively, perhaps odd numbers are actually much rarer, leading to them having a higher market value, and thus more interest in discovering them.)



The isEven package uses the isOdd package…


I did something similar six years ago called gg-flip for npm - https://github.com/avinassh/gg-flip

It was submitted in HN too - https://news.ycombinator.com/item?id=16194932



I was wondering what the code looked like...

    'use strict';
    
    const isNumber = require('is-number');
    
    module.exports = function isOdd(value) {
      const n = Math.abs(value);
      if (!isNumber(n)) {
        throw new TypeError('expected a number');
      }
      if (!Number.isInteger(n)) {
        throw new Error('expected an integer');
      }
      if (!Number.isSafeInteger(n)) {
        throw new Error('value exceeds maximum safe integer');
      }
      return (n % 2) === 1;
    };


I mean... when you look at it like that, at least it's got the error checking integrated as well... tho the fact it pulls in is-number is fucking hysterical lol




The implementation of is-even is mind blowing (numbing?).

    'use strict';
    
    var isOdd = require('is-odd');
    
    module.exports = function isEven(i) {
      return !isOdd(i);
    };
good thing it declares its dependencies properly!


Big fan of code reuse. Small packages providing single unique functionality is the way to program modern systems. It has to be right if the bright minds in the NPM / Rust community follow this approach.


I'd fire someone who imported those packages.


Even better if it was built on install by executing another script to generate everything (naturally using %). New nerd game: enshittification inverse golf.


Every time I see these packages mentioned, I’m reminded of an interview with Joe Armstrong. In it, he said, (paraphrasing from memory) “Wouldn’t it be interesting if a language had a global, flat registry of functions that anyone could pull from and contribute to? That way, bugs are fixed in one place, there’s less reinvention of the wheel, things get more correct over time, and programs just become a simple composition of standard functions.”

I may be misremembering his meaning, but I remember thinking it was an interesting idea. It wasn’t obviously a terrible idea. I thought it would be like the Clojure standard library on steroids, especially if it was coordinated and vetted by a decent team.

But alas, NPM has proven it otherwise.



I don't think NPM has proven that idea infeasible, just that it may not be a good idea to depend on third-party content that may change under your feet, on such a fine-grained level.

But have a look at the Unison language https://www.unison-lang.org/docs/the-big-idea/ , that has such global registry but addresses each function by a hash of its syntax tree, and thus sidesteps the issue.



How are bugs "fixed in one place" that way?

You either accept updates of third party packages, with little to no vetting of your own, and get fixes "for free" or… you don't.

There's no middle ground: the only way to save effort is to trust others.



NPM is a worse idea than Joe's as Joe was talking about a single community vetted monorepo. Not a free for all of individual repos.

With Joe's idea everything is up front and part of the language and not a bulletin board of packages near the checkout line at the git supermarket. This way simple stuff like isint() can make its way into the languages official standard library. This should eliminate the uncertainty of 3rd party packages maintained by a random number of individuals that can be taken down or tainted at any time.



I wonder if it might be practical to bootstrap such a thing off of Wikifunctions, with a process to vouch for a function as "important enough to merit inclusion" and tooling to ensure that the Wikifunctions implementations actually agree, plus something to synthesize/translate a first-pass implementation in the desired language?


NPM doesn’t capture the full benefit of an open registry of functions because, while anyone can fork and create an alternative @christophilis/is-even or @divbzero/is-even, there is no good way for developers to pick the best version of is-even.

Maybe if NPM required all package maintainers to use a namespace, the global is-even package name could resolve to the most-used fork.

So for an import like:

  import isEven from 'is-even'
You could install a specific fork:

  npm install --save @christophilus/is-even
Or default to the most-used fork:

  npm install --save is-even
In both cases, package.json would only contain namespaced dependencies and npm outdated could flag if a different fork is more commonly used.


This Joe Armstrong quote sums up NPM:

"You wanted a banana but what you got was a gorilla holding the banana and the entire jungle."

(originally on object oriented programming)



To be fair, for all its pitfalls, NPM does provide a limited version of that idea to millions of codebases.


But of course WebAssembly is the future. So we should reimplement it in WASM.


The joke is completely lost on me. I don't even mean the person who did this — why not, after all. 1198 upvotes ATM is what confuses me.

Lookup tables for computable values are not novel, nor are they a joke. This actually is a solutions for the time/memory tradeoff, as author well knows. The problem at hand is absurd, but quite primitive, so there was absolutely no doubt this can be done. No real measurements were performed, apart from the observations that it took about 10 seconds to chew through 40GB program on his computer. So what did we learn? That exe files cannot be more than 4GB? That a program with 2^32 ifs is about 300 GB? Why 1198 people found this interesting?

Maybe I'm spoiled, but unlike "Hexing the technical interview" or some SIGBOVIK stuff, this doesn't seem crazy, just pointless.



The joke is that he actually did it. People have been joking about it for decades. But, the crazy bastard did it for real. It’s so extreme that no compiler could handle it. Not even any known assembler. So, he had to generate his own machine code binary to make it work. But, it works! Nutz!


> Lookup tables for computable values are not novel, nor are they a joke

It's been ages since I've done anything low-level, but I don't think 4 billion if-statements are compiled to a "lookup table" when optimizations are turned off. Each and every if statement would be evaluated in order to determine if it matches the input. This seems to be supported by OP's program outputting in much less time for small numbers than for large numbers--since the small numbers appear earlier in the code.

A 4 billion case switch statement on the other hand, I would expect to compile to some kind of lookup table. Though when the data type is an unsigned int even then I'm not sure what the compiled code would look like without optimization.



Would a compiler be smart enough to generate a big table for this?

I wonder if it’s just so big the optimization pass would give up, or perhaps makes tons of tables of 256 entries each.



Sometimes people do things to be funny.


My understanding was that it was a parody of these blog posts satirising the pointlessness of pushback against conventional wisdom. Rather dry.


why though? this is exactly what databases have been invented for! One could simply store a mapping of numbers to their classifications as 'even' or 'odd' in an SQLite database. This approach has the added benefit of not requiring program updates whenever a number's classification changes from odd to even.


Databases still have to be maintained and updated. You're better off setting up an Ethereum contract with proper economic incentives for others to act as an oracle and return the proper answer at any given time.


Anyone want to try to figure out the gas costs for deploying 30GB worth of smart contracts? xD


    CREATE TABLE even_or_odd (
      is_odd NUMBER(1),
      is_even NUMBER(1),
      is_zero NUMBER(1),
      is_one NUMBER(1),
      is_two NUMBER(1),
      is_three NUMBER(1),
      -- ...
    );
    INSERT INTO even_or_odd (is_odd,is_one) VALUES (1,1);
    INSERT INTO even_or_odd (is_even,is_two) VALUES (1,1);


A better design would look like this:

    CREATE TABLE numbers (
      minus_four_billions_two_hundred_ninety_four_millions_nine_hundred_sixty_seven_thousands_two_hundred_ninety_six_is_even BOOLEAN,
      ...
      zero_is_even BOOLEAN,
      one_is_even BOOLEAN,
      two_is_even BOOLEAN,
      ...
      four_billions_two_hundred_ninety_four_millions_nine_hundred_sixty_seven_thousands_two_hundred_ninety_six_is_even BOOLEAN
    )
That design can store multiple versions of the data, making it more resilient to future changes in the even/odd property of numbers.


Yes, this seems like the kind of thing that should be in Wikidata. Then you also don't have to keep the database locally and can simply do a quick HTTPS request. (The only problem might be if TLS itself needs an even/odd function, but it probably doesn’t.)


I like this option just because then it makes it easy for anyone to update the data when future changes arise.


> The only problem might be if TLS itself needs an even/odd function, but it probably doesn’t.

Doesn't sound like its ready for cloud scale.



Correct, but you'd certainly want to use an XML database.

Not only it helps with portability of the data, it also keeps it in a human-friendly format should the need to inspect it by hand arise.



XML is not web scale though, if you want to serve lots of users it has to be JSON in MongoDB.


AWS's Elastic Cloud Parity already provides this, and is far more scalable.


This is one of the most entertaining articles I've ever read here. He should put the source code online so ChatGPT can "learn" from it.


That would definitely violate his strict license:

    /* Copyright 2023. All unauthorized distribution of this source code 
       will be persecuted to the fullest extent of the law*/
And with such elegant code, who can blame him?


To what extent does the law allow persecution?


LOL. "persecuted" -- I just noticed this!


OpenAI see.

OpenAI ignore.

OpenAI train.



This is amazing technology. You should sell it to AWS so they can offer an Enterprise-ready AWS EvenOrOdd API for everyone who does not know how to host the 40GB executable properly. With the power of the cloud this programme would be unstoppable


It’s just begging to be a Lambda function.


I'm surprised no one has chimed in on how the program is "processing" 40GB of instructions with only 800 MB/s * 10s of disk read.

If I had to hazard a guess, there's some kind of smart caching going on at the OS level, but that would entail the benchmark of "n close to 2^32" not being run correctly.

...Or that the CPU is smart enough to jump millions of instructions ahead.



> my beefy gaming rig with a whopping 31.8 GB of memory

So a rerun of the program should only need to load ~8GB if the filesystem caching is somewhat loop/scan resistant.

My first thought was "probably the math is wrong", but looks like it adds up to something reasonable - especially as all numbers are rather vague / rounded (e.g. let it be 12s) and the number was just high, not absolute maximum.



I guess this would be the expected, though boring, answer -- "incorrect" benching (i.e. clear the cache first).


My guess is either compression or stuff lingering in RAM. The CPU can't be smart here since it doesn't know what any of the future ifs will be. It doesn't know they're in order, or unique, or even valid instructions. You could (theoretically; the OS probably wouldn't let you) replace an if with an infinite loop while the program is running.


Could it be the branch predictor of the CPU somehow catching on to this? Perhaps something like 'the branch for input x is somewhere around base+10X'.


> Perhaps something like 'the branch for input x is somewhere around base+10X.

That's unlikely. Branch predictors are essentially hash tables that track statistics per branch. Since every branch is unique and only evaluated once, there's no chance for the BP to learn a sophisticated pattern. One thing that could be happening here is BP aliasing. Essentially all slots of the branch predictor are filled with entries saying "not taken".

So it's likely the BP tells the speculative execution engine "never take a branch", and we're jumping as fast as possible to the end of the code. The hardware prefetcher can catch on to the streaming loads, which helps mask load latency. Per-core bandwidth usually bottlenecks around 6-20GB/s, depending on if it's a server/desktop system, DRAM latency, and microarchitecture (because that usually determines the degree of memory parallelism). So assuming most of the file is in the kernel's page cache, those numbers check out.



I doubt it, branch predictors just predict where one instruction branches to, not the result of executing many branches in a row.

Even if they could, it wouldn’t matter, as branch prediction just lets you start speculatively executing the right instruction sooner. The branches need to be fully resolved before the following instructions can actually be retired. (All instructions on x86 are retired in order).



There's also anticipatory paging, the OS can guess which pages will be requested the next time around.


It can't be the CPU since it is actually memory-mapped code, and the branch predictor surely cannot cause page faults and so cannot page in the next page of code.

Truly curious. I guess the linear access pattern helps but 800 MiB/s?



Very likely the majority of the file is paged in, and can be prefetched just fine. The author doesn't clear/disable the page cache.


I assume modern branch predictors are capable of picking up a trivial pattern like this.


It mmaps the program, unused pages then only take a page table entry and are not loaded. The only page actually loaded is the one he directly jumps into. Neat trick.


It's not jumping directly, it's executing sequentially, comparing against each consecutive number in sequence.


You are right, my bad. Thanks for clarifying it.


I'd like to see a fully distributed version. All you need is 4B hosts (IPV6 FTW) named N.domain.com (where N varies from 0 to 4B-1). The driver app sends N to the first host (0.domain.com). Each host compares the incoming N with their N.domain.com name; if they match, return the host's true/false value. If they don't match, forward the request to (N+1).domain.com and return the result.


N+1 may overflow; using N-1 is more robust.

Also, I think this can easily be implemented; you don’t need 4B hosts, just 4B DNS entries, and those, you can create with a single wildcard entry (https://en.wikipedia.org/wiki/Wildcard_DNS_record)
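For what it's worth, the forwarding chain can be sketched locally, with plain function calls standing in for the hypothetical N.domain.com hosts (names and structure here are illustrative, not part of the original scheme):

```javascript
// Toy simulation of the scheme: host k knows only its own hardcoded
// answer; any other query is forwarded to host k+1.
function queryHost(hostId, n, maxHost) {
  if (hostId === n) return hostId % 2 === 1; // this host's baked-in answer
  if (hostId >= maxHost) throw new Error('ran out of hosts');
  return queryHost(hostId + 1, n, maxHost);  // forward to the next "host"
}

console.log(queryHost(0, 5, 100)); // true: 5 is odd
console.log(queryHost(0, 4, 100)); // false: 4 is even
```

At 4B hosts, the recursion depth (and the HTTP latency) would be the least of your problems.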



Don't give Suckerpinch any ideas


Jokes aside, lookup tables are a common technique to avoid costly operations. I was recently implementing one to avoid integer division. In my case I knew that the numerator and denominator were 8-bit unsigned integers, so I've replaced the division with 2 table lookups and 6 shifts and arithmetic operations [1]. The well known `libdivide` [2] does that for arbitrary 16, 32, and 64 bit integers, and it has precomputed magic numbers and lookup tables for all 16-bit integers in the same repo.

[1]: https://github.com/ashvardanian/StringZilla/blob/9f6ca3c6d3c... [2]: https://github.com/ridiculousfish/libdivide
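As a toy illustration of the technique (not the division code at the links above), parity itself can be answered from a precomputed 256-entry table:

```javascript
// Precompute parity for every byte value once; afterwards each query
// is a single masked array read. (For parity, n % 2 is of course
// cheaper; real lookup tables pay off for operations like division.)
const PARITY = new Uint8Array(256);
for (let i = 0; i < 256; i++) PARITY[i] = i % 2;

function isOddLookup(n) {
  return PARITY[n & 0xff] === 1; // the low byte determines parity
}

console.log(isOddLookup(7));   // true
console.log(isOddLookup(300)); // false: low byte is 44, which is even
```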



When I was in college, I had to take an assembly language course. It was on MIPS: 32 registers.

One assignment said, roughly:

- Read ten numbers into an array.

- Sort the numbers into the array.

Nothing said I needed to read the array, so I read in the numbers, both to the array and registers, hardcoded a bubble sort on the registers, and wrote the result to the array. End of program.

I was being cute. I got full marks with a note to stop being cute.



Visionary genius Ross van der Gussom is my new favorite mythological creature.


Think of Python as a way to script C, and skip most/all compiling. If your Python is slow, you are probably doing it wrong.

Recommend reading this: https://cerfacs.fr/coop/fortran-vs-python



I was a teaching assistant in a "data structures and algorithms" course where students could choose either Java or Python. Most of the labs were the same except that the treemap lab had to become a hashmap lab for the Python version because it was so excruciatingly slow.


If you want to see C's _true_ scripting language, check out Lua.

It can be embedded in your C application, unlike Python which is the other way around (Lua is like 80KiB, Python is several MiB).



Not all Python is just a front-end to fast C code. That only really works with numerical computing of big tensors.

Though I'd still agree - if your Python is slow you're doing it wrong - you shouldn't be using Python!



You can also call eg Rust or OCaml code from Python, doesn't have to be C.


Of course. When people talk about calling C from Python they really mean "a fast language".


If you switch away from Python, you are probably sub optimizing


There are reasons other than speed that makes people move away from Python.

Btw, recently CPython has sped up a lot. Between versions 3.9 to 3.12 many programs run about twice as fast. Much of that improvement is thanks to the 'faster-cpython' project (which Microsoft is generously funding).

I contributed about a 1% speedup during that time, too.



I ran a web search to figure out if “Ross van der Gussom” was some sort of inside joke. The top two search results were OP and the parent comment.


I assume a play on Guido van Rossum the creator of python but I'm unaware of any meaning deeper than that if that's what you're looking for.


It's just a funny misphrasing of his name.


I'd say wrap this with a good interface and we can optimize it later.


I’m pleasantly surprised that nobody started the discussion on whether zero is odd or even ;-) https://en.wikipedia.org/wiki/Parity_of_zero


I always found it funny that this is printed as a reminder text on Magic the Gathering cards that care about even or odd values: https://scryfall.com/card/iko/88/extinction-event?utm_source...

But thinking about it, I have no doubt that Wizards found out this confusion is somewhat widespread during playtesting, and printing a short reminder was an easy fix.



I asked a few LLMs whether zero is an even or odd number to see how they fared on such a widely misunderstood question:

LLAMA 2 7B: 3x even, 2x neither

LLAMA 2 13B: 1x even, 4x neither

LLAMA 2 70B: 1x even, 4x neither

GPT 3.5: 20x even despite trying to trick it

Mistral 7B: 1x even, 3x neither, 1x even+neither?!

Mixtral 8x7B: 8x even, 2x neither



I find it hard to understand how someone can be confused about that. But I dimly remember talking to people with this confusion in the past.


Please don’t take this as defending this thinking. I’m just guessing at how someone could get to this confusion.

I can see how you’d get there by thinking of numbers as a thing for counting. Even numbers give you piles of two without one left over. Zero fails the first condition since it gives you no piles at all. But it doesn’t leave a pile of one, so it’s not really odd, either.

If you ever find yourself confused in this way about definitions consider: is this definition serving me? Could I adopt a different definition that’s used by people really good at this sort of thing?

(Zero is even because any integer n that can be formed by 2k for integer k is even, and that can be formed by 2k+1 is odd. 2x0=0)
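That parenthetical definition translates directly into a one-liner sketch:

```javascript
// n is even iff n = 2k for some integer k, i.e. n/2 is itself an
// integer. Zero qualifies, since 0 = 2 * 0. (Safe-integer caveats aside.)
function isEvenByDefinition(n) {
  return Number.isInteger(n / 2);
}

console.log(isEvenByDefinition(0));  // true: 0 = 2 * 0
console.log(isEvenByDefinition(-4)); // true
console.log(isEvenByDefinition(7));  // false
```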



I am fairly certain I can split an empty pile into two empty piles, without a remainder.


Hmmm by definition an empty pile is not a pile though, is it? So there's nothing to split?

;-)



Hmm, I think that's how normal people approach the question (implicitly), and that might be how they think zero is not an even number.

At least this would explain their reasoning in terms I can understand.



And an empty plate is not a plate, also by definition? Okay, sure.


Ah I suppose if you see an "empty pile" as still a vessel with nothing on/in it, your point stands.

But a plate is its own thing, in addition to what goes on it. A "full plate" is full of what? And a full plate is the plate plus whatever is on it, not just the stuff on it.

I think of a "pile" of stuff as being its own thing, that being the pile itself. An empty pile is the absence of the thing.

That was my interpretation, but I see what you meant. :)



well that's odd


Your defense boils down to people having an insufficient understanding of zero as an ordinary number, and still clinging to the concept that zero means "nothing" or is otherwise magic.

This hints at a failure of math education.

As an analogy, many (usually, but not always, weaker) programmers still have magic ideas about booleans and comparison operators, and write nonsensical stuff like if (a == true). When you ask them, it's invariably that in their mind, there's a mystic connection between comparison operators and if statements.



> When you ask them, it's invariably that in their mind, there's a mystic connection between comparison operators and if statements.

It doesn't have to be mystic. It's perfectly fine to design a language that works like this. It wouldn't be a good language, but it would be possible.

Just like PHP didn't use to support constructs like `f(10)[2]`; that used to be a syntax error. So you needed something like `x = f(10);` first, before accessing `x[2]`.

If you saw that kind of construction with the intermediate variable, you might also accuse the programmer of imagining mystic connections.



Matlab still doesn't handle f(10)(2)... (The _(_) syntax is both function calls and indexing there.)


You are giving me flashbacks to memories I thought I had successfully repressed.


> Your defense

I literally… never mind.



Ha ha, I didn't even realize this when I typed the answer. I hope it was clear that I didn't mean to imply this was your personal opinion.


It wasn’t clear then, but I can certainly see how you’d mean it that way. It was “mine” as in “I brought it”.


Yes, it all boils down to definitions. Definitions that have 0 as an even number work well in math.

For most people's daily life, it doesn't really matter which category zero would fall under (as they never really eg consider dividing zero items evenly between people.) So their (implied) definitions can be all over the place.



>Even numbers give you piles of two without one left over. Zero fails the first condition since it gives you no piles at all. But it doesn’t leave a pile of one, so it’s not really odd, either.

This definition does not naively extend to negative numbers, which are anti-piles of things. You are right, of course, about definitions serving their purpose. In this case maintaining the symmetry of alternation requires 0 to be even, and that argument could even be extended to negative numbers. (Of course other numbers, like the rationals and reals, are pure fiction and can safely be ignored negative or otherwise.)



> (Of course other numbers, like the rationals and reals, are pure fiction and can safely be ignored negative or otherwise.)

I think natural numbers are already pretty fictional.

Btw, you might want to look into p-adic numbers.



It's apparently confusing enough that if you ban odd license number plates then the cops don't want to arrest anyone whose license plate ends in 0 because they're not sure if it is even or not. At least that was the case in Paris in 1977, when they alternated between odd/even license plates every day to limit the number of cars.


Interesting. Do you have a source for that?


It looks like bans for odd/even license plates were used in Paris in 2014 [1] and 1997 [2], but not before then. However, a similar scheme was used to ration gasoline in the US during the 70s [3].

The only source I can find for the claim about police confusion is the one cited by Wikipedia [4], whose reliability I'm inclined to doubt based on the 1997/1977 discrepancy.

[1] https://www.npr.org/sections/thetwo-way/2014/03/17/290849704...

[2] https://www.wired.com/1997/09/paris-smog/

[3] https://www.npr.org/sections/pictureshow/2012/11/10/16479229...

[4] https://en.m.wikipedia.org/wiki/Odd%E2%80%93even_rationing#D...



The wikipedia article matched my vague recollection of the event, but if they got the year wrong then it may as well just be a rumour at this point.


Thanks!


The thing that gave me pause the first time I thought about it (it was during a test so I couldn’t ask or check) was that if zero is even, that means (despite numbers being infinite) there’s one more even number than odd numbers.

I also knew that 1 is not prime, even if logically it should be. Its definition specifically states a prime number needs to be greater than 1. So that means mathematics sometimes has exceptions for numbers inside a definition.

Given that, it’s not immediately obvious that zero would be even. It wouldn’t be odd either, that was out of the question, but it could be neither.



> The thing that gave me pause the first time I thought about it (it was during a test so I couldn’t ask or check) was that if zero is even, that means (despite numbers being infinite) there’s one more even number than odd numbers.

Thanks to Hilbert's Hotel you can re-arrange the numbers to have an arbitrarily larger or smaller overhang of even or odd numbers.

https://en.wikipedia.org/wiki/Hilbert%27s_paradox_of_the_Gra...
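Another way to see that including 0 among the evens doesn't create "one more" even number: the map n ↦ n + 1 pairs every even natural number (0 included) with exactly one odd number, so the two sets have the same cardinality. A quick sketch of that bijection (my own illustration, not from the article):

```python
# Pair each even natural number n (including 0) with the odd number n + 1.
# Every even gets exactly one odd partner and vice versa, so neither set
# "outnumbers" the other, despite 0 sitting at the start of the evens.
evens = range(0, 20, 2)
pairs = [(n, n + 1) for n in evens]

assert all(e % 2 == 0 and o % 2 == 1 for e, o in pairs)
print(pairs[:3])  # [(0, 1), (2, 3), (4, 5)]
```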

> I also knew that 1 is not prime, even if logically it should be. Its definition specifically states a prime number needs to be greater than 1. So that means mathematics sometimes has exceptions for numbers inside a definition.

The definition I use is that a prime number needs to have exactly two distinct divisors. No need for any special cases in this definition.

(The motivation for this somewhat strange definition is so that factoring any positive number, including 1, into a multiset of its prime factors is unique.

If you redefine the prime numbers to include 1, prime factoring is not unique. Of course, you could still make everything work out in the end: you just need to declare that your new-prime factoring is unique up to the multiplicity of 1s.

Just like eg standard decimal numbers don't change if you add leading 0s, new-prime factoring would not change if you add factors of 1.

You can do math with almost any arbitrary definitions, if you add enough exceptions and explanations in your theorems to work around the rough edges in your definitions.)
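The "exactly two distinct divisors" definition needs no special case for 1, since 1 has only one divisor. A naive (deliberately slow) sketch of that definition:

```python
def is_prime(n: int) -> bool:
    """Prime iff n has exactly two distinct positive divisors (1 and n)."""
    divisors = [d for d in range(1, n + 1) if n % d == 0]
    return len(divisors) == 2

# 1 has a single divisor (itself), so it is excluded with no special case:
assert not is_prime(1)
assert is_prime(2) and is_prime(13)
assert not is_prime(12)  # divisors: 1, 2, 3, 4, 6, 12
```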



One might even say it's odd.


I particularly liked this bit

> Not only is 0 divisible by 2, it is divisible by every power of 2, which is relevant to the binary numeral system used by computers. In this sense, 0 is the "most even" number of all.

in the arena of evenness one-upmanship, zero wins.
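And it checks out mechanically: 0 mod any nonzero number is 0, so 0 passes the divisibility test for every power of 2, not just 2 itself. A one-liner to confirm (my own check, not from the article):

```python
# 0 is divisible by 2**k for every k, since 0 % m == 0 for any nonzero m.
# In the "how many factors of 2 does this number have" sense, no other
# number beats it.
assert all(0 % (2 ** k) == 0 for k in range(1, 64))
print("0 is divisible by every power of 2 up to 2**63")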



Is that article about the IEEE 754 positive zero, or negative zero, or both?


When I was a junior developer fresh out of college, another junior and I noticed a bank sign with the temperature showing to be -0 degrees. We made a few comments about Two's Complement and felt a bit smug in our superiority over the manufacturer of the sign. Decades later I read an article about how -0 degrees is a standard in civil meteorology for "below freezing but rounded up to zero". Though it took a while to learn, it was a good lesson in making assumptions about standards in other domains.
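On the IEEE 754 question upthread: both zeros exist as distinct bit patterns, but they compare equal, so for parity purposes there's no difference. The sign does survive in sign-aware operations, though. A quick Python check:

```python
import math

# IEEE 754 negative zero has a different bit pattern from positive zero,
# but the two compare equal under ==.
assert -0.0 == 0.0

# The sign bit is still observable via copysign and via str():
assert math.copysign(1.0, -0.0) == -1.0
assert str(-0.0) == "-0.0"
```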


This is the kind of discussion that only matters for very theoretical edge cases or inconsequential pop-math debates.

0 is even

I mean, it's the first sentence of the fine linked article:

> In mathematics, zero is an even number.


