(comments)

原始链接: https://news.ycombinator.com/item?id=39046838

The confusion arises because the language style and structure shift suddenly and frequently throughout the material, which can lead to misunderstanding or ambiguity. In addition, the quoted phrases contain technical terms ("SQL injection") and reference a legal framework ("Section 202b StGB"), creating potential barriers to understanding for readers without prior knowledge or expertise. However, further explanation and clarification could ease these challenges and allow a clearer grasp of the overall concepts presented. On the whole, without a professional background or familiarity with computer science and German criminal law, the article may be difficult to follow.

Related articles

Original article
German developer guilty of 'hacking' for exposing hardcoded credentials in app (infosec.exchange)
294 points by zoobab 13 hours ago | 211 comments

Article/title is a bit confusing and perhaps borderline clickbait, so...

If I understand correctly it seems like his crime was *using* the exposed database credentials to log in to the third-party database server.

So he wasn't charged for simply "exposing" the credentials as the title says, but actually using them to poke around.



It is basically impossible to know what a system is without accessing it and looking around.

It is like being given a key card for security clearance in a building. You assume any door it opens is a room you're allowed to be in. If security finds you in a room you aren't supposed to be in, is that your fault? Or whoever gave you the card with the wrong clearance level?

Also how about the situation where you open a door, look inside, immediately realize you're not supposed to be there and then report it to security? Should you be punished?



I'll argue the other side:

It's more similar to finding a key hidden under the mat at someone's house. You can then contact the owner and inform them of the security issue but what you should not do is use the key to open the door and go in and see if there is really harm in you being able to enter. Because you might accidentally achieve the exact thing that a criminal wants such as finding a note with a password on it. You can then claim you didn't want to find it but the fact is that 1) you broke the law by entering and 2) you caused a malicious event (namely obtaining a password).

You can then pinky swear that you didn't already use it for any further malicious actions but that will be difficult to verify.

If I ever lose my key, I don't want people to enter my house to prove that they can. Inform me of how you obtained the key and I'll change the locks and make sure I don't lose my key again. If you do enter my house, expect me to press charges.



It wasn't hidden though.

Let's pretend the guy was a pest exterminator, hired to kill some bug infestation. He is given a keycard to access most of the building. As he is hunting down the nest, he finds a hole in the wall. Bugs tend to come through holes in walls, so he goes in to figure out whether what he is looking for originates there.

He enters the room on the other side, and looks around for other holes that might have bugs. He then notices a file with big red letters saying "TOP SECRET". Turns out he accidentally entered the maximum security file room. So now he leaves the room, goes to security and tells them what he found. Then gets arrested for 1 count of trespassing and 1 count of breaking and entering.

How is that fair?



Exactly. As the original article states, he just didn't assume he'd stumble onto a sensitive database.

> According to the defendant, the defendant has first assumed that the software on his customer's server will connect to a Modem Solution database that was only intended for his customer and contained only his data. From the read-out database name, this sounded quite plausible. However, the defendant quickly discovered that the corresponding database contained much more information.



> It wasn't hidden though.

If a key is taped to the outside of the door that it opens, you still can't use it without committing a trespass. Not unless you got authorization to use it from the right person(s) first. Shitty security isn't a legal invitation.



Hmm, big caveat there is that in most jurisdictions that would only be considered trespass if you’ve been pre-warned that you’re not allowed in.

The “key” analogy though is a bit stretched.

It is clear that here, the credentials were used in a way other than intended.

The question is now rather: "is it reasonable to expect that a person interacting with a website normally would be permitted to use anything sent to them by that website in any way they choose?"

A lot of us here are more tech-minded and would likely say “yes, if you don’t want people using credentials, don’t provide them”. The courts on the other hand may take a different view and say “well we already restrict what people can do with content on a website through copyright laws, this person should not automatically have the right to use those credentials in any way other than the way provided”.

It will become even more complicated if the website has terms of service which clearly state something which has a similar technical meaning to “no trespassing”’s physical meaning.



I think an electronic keypad works better as an example.

You get to a door, and there is a postit note on the door saying "Keycode is 2424".

You know your job is in the building. Your keycard let you into the room with the door. Therefore, if this door led to a "high-security" area, surely they wouldn't put a postit note there? Maybe, as the bug exterminator, or maintenance person, it is expected that I should be able to enter that room?



The key was taped to the outside of the door explicitly to be used by the customer though.


No it wasn't intended for customer use, the customer wasn't supposed to be directly hitting that API. There was reverse engineering that had to happen before recovering the plaintext password. He was already someplace fairly iffy for him to be. It was much more like being on the front porch of a house you don't own and finding a key.


If that house (the app) was built inside my living room (my phone),

then how does the analogy play out?



Enough with building comparisons.

The vendor put the master keys of the main database in a publicly-accessible package. It’s insane.

Of course the researchers tested the keys, to check whether it was true. If it can be proven that it was used for nefarious reasons, then ok for blaming him, but as far as we know, the researcher just used it to get convincing evidence and left the system.

The company is to blame 100%, and the researcher should get an encouraging award for white-hat reporting. Not this.



In that case, it's his job to enter holes.

Nobody asked this developer to go through any holes. If you see a hole in your hotel room leading to another room, you don't go through it to see if there is anything to steal, even if you go through holes a lot for your job. You inform the owner that there is a hole.



His job was to debug the program though, so wouldn't that literally be all about poking around holes?

His job was to solve a problem with a database's log files. He found a second database. "Maybe this second database is causing duplicate log files?" so he goes in and looks.

It makes sense to me. Based on the german article: (machine translated to english)

> The software of Modern Solution had a database of the defendant's customer "fully littered with log reports," says the defence lawyer. His client was then given the order from his customer to solve this problem.

> The defendant then determined that the software from Modern Solution established a MySQL connection over the Internet to the servers of the Gladbeck company. ...the defendant has first assumed that the software on his customer's server will connect to a Modem Solution database that was only intended for his customer and contained only his data. From the read-out database name, this sounded quite plausible.

> However, the defendant quickly discovered that the corresponding database contained much more information. It later turned out that the data here was included Modern Solutions customers and from all end customers whose online shops were included. According to his own statement, the defendant had directly separated the database connection when he discovered that he had access to other customers' data.

> the programmer contacted the affected company with the help of a tech blogger, which then closed the security gap and displayed the programmer at the police.



Exactly. Maybe he could just be exploring an API.

It isn't hidden, maybe this is how I'm supposed to get data.



Unfortunately, like many physical security analogies, there's no one correct way of translating the details from the software world to the real world.

I think how close you think those two situations are depends on whether you consider the application to be an agent of the customer (like a personal shopper), or of the shop (like a salesman).

Did:

A - the programmer coerce the application (an agent of the shop) into accessing secret information (breaking into the shop warehouse), or

B - the programmer ask the application (his own agent, a personal shopper) to go and look for interesting things in the database (shop's warehouse) for him, a privilege that the application (personal shopper) was afforded in advance by the shop?

I personally think that A is a dangerous precedent to set for society. Treating any network-bound application as the agent of its creator would mean it was wrong to observe your computer (which you generally use for more than just accessing one online shop), and would therefore effectively kill FOSS.



He didn't ask the application though. He changed the application or used information in the source code in a way that the shop didn't intend. So B is clearly not the case.

But even just assuming that the fact that the app was using a key means that he can try to access other things with it is a dangerous precedent. Can I access your OnlyFans if you give me the password to your Netflix account and you use the same password for both? Can I access the company database directly if the app connects to it? There could be all sorts of confidential info on that server, perhaps the gossip of the customer service people about you or info about other customers.



It's not similar at all.

The key (connection string) was already given to him via the app and he was entering the house (database) on a regular basis.

This would be like mistaking a door for the bathroom but finding a closet full of gold instead.



He had access to the key by looking at the source code. The key wasn't intended to be used by him manually.


I wonder if there was some kind of software license that stated that the developer was giving the user the key but the user wasn't permitted to use it. At least in that case, he would have known the company's intentions.

I don't think we can otherwise know for sure the company's intentions. If I leave the front door to my house open (or if I tape the key to the outside of the door, to further strain the physical key analogy), is it my intention that people just come on in? We have no idea.



Are you given a key if it is contained in the source code of a compiled binary that you are given?


Well, the key was already “given to him” by the nature of leaving it under the mat.

Oh, but this is a food truck that he’s been visiting on a regular basis so obviously he’s allowed to go in the back door, with full access to all the ingredients and poking around inside the cash register?

Also, if you take someone else’s gold behind a door you unlocked with a key that isn't yours, it is called stealing.



> leaving it under the mat.

Nothing was "under the mat" since he was already using his PC to connect to the database with those same credentials.

> if you take someone else’s gold behind a door you unlocked with a key that isn't yours, it is called stealing.

Nothing was stolen, he informed the vendor of the issue immediately.



But he had no right to examine the app, or enter the database outside the app. Even if that is a simple task for an expert, it's still an obvious difference between legitimate usage of the app, and illegitimate usage of the database.

Wouldn't that count as reverse-engineering, which is often also illegal?



We’re getting downvoted by the “I have no clue about how the criminal justice system works” brigade.


You are getting downvoted by people who understand that, while the situation is nuanced, it is NOT equivalent to flipping over the mat at a random house and entering it.

It may not be totally akin to a security card opening more doors than it should, but it is entirely reasonable to assume that "the key in your copy of the app" is "your personal access key".



> it is entirely reasonable to assume that "the key in your copy of the app" is "your personal access key".

I've never been in a situation where I was given an app binary as a user but where the author intended for me to go find a key in the decompilation so that I could manually access information that wasn't visible in the app.

This seems like it is an unreasonable thing to assume. And courts seem to agree.



> This seems like it is an unreasonable thing to assume. And courts seem to agree.

Illegal vs legal and reasonable vs unreasonable are not entirely analogous.

There are lots of laws that I think are unreasonable and a lot of legal things that I think are unreasonable.



Well, apparently the courts think it is equivalent, at least in Germany and the United States.

That you favor one argument over another as your lens with which to side when it comes to really stupid analogies, including mine, is entirely in bad faith.



So your argument is that since the courts think a certain way, that we should accept that and move on?


I agree with the courts. Most people do, outside of a vocal minority on this forum.

What is your plan if you disagree?



It's just a fundamental disconnect then, I guess. You and I have a very different understanding of how this "crime" was carried out.

He was charged for connecting to a database with credentials that he was already connecting to before with the application. I could argue that he already had access to the other data because the connection allowed it.

So, that begs the question: what did he actually "break" into and how is it different than connecting to the database via the app on the same PC? Would you even be able to tell the difference as a sysadmin looking at logs?



If you want to argue about the law, you can! There is this entire profession dedicated to this!


If the mat was in their driveway, maybe.


> It is basically impossible to know what a system is without accessing it and looking around.

What reason do you have to need to know what a system is? Just because you think you have a password for it?

> If security finds you in a room you aren't supposed to be in, is that your fault?

It depends. Do you know you're not supposed to be in that room?

> Should you be punished?

I've run into this exact same situation three times. One was a hard coded SSH key to a root account, two were hard coded passwords.

In all three cases, I simply contacted the vendor, let them know I had this key, coordinated disclosure with them, and then told them what the password was and where I found it.

In all three cases, the disclosure was enough for them to go wide eyed, immediately understand which systems were impacted, and then quickly leave the call to go fix the problem.

There is _zero_ reason for you to _use_ exposed credentials if you find them. It adds nothing to the "security research" you may be doing.



But in this particular case, it sounds like it wasn't known to him that it was an exposed credential. He thought it would just access his own data, so there would have been no reason to report anything to the vendor at that point. The access to protected data here was accidental.


He thought the credentials, which were hardcoded into the app for a MySQL server, somehow accessed only his data?

That's hard to believe. Which seems like the defense understood, because they offered that the "name of the remote database" seemed like it could be related to his customer that he was contracted to.

In the end, he's going to pay 3,000 euro, and made an example of. He could have received 3 years in prison. So slightly unfair to everyone but hardly worth stretching credulity to defend.



Finding a key on the floor and then using that key to break into a building is illegal, and for good reason.


It's more like you're given a keycard to a building that says "Floor 20" but then find out it has access to all the floors and telling the building security about it. All they did was 'open' the 'elevator door' here revealing the access they mistakenly had. It doesn't sound like they snooped through other customers' data or downloaded anything.


You're not really given it though. You found it, and even though nobody ever asked you to use the keycard, you used it on the floor that nobody ever asked you to go to.


Your machine regularly uses that key to access what's behind the door on your behalf; there is no reason you shouldn't be able to access it yourself.

If you don't find the key and realise it's actually a lost one, leading to a potentially dangerous place, someone else will and they won't be benevolent.



They weren't not allowed either, though.


It's not finding a random key on the floor, unless the key was in his house sending data back to a 3rd party server. The closest paradigm in technology terms is finding an S3 key in a 3rd party library and then browsing the S3 bucket to see what's in it. Authorization was granted by providing the developer the library; they literally sent him a username and password. If the code were unique to each client, would the person have been charged?
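
To make the "S3 key in a library" paradigm concrete, here is a minimal sketch of what "browsing the bucket" amounts to once such a key works at all (bucket name and key values are hypothetical; the boto3 library is assumed):

    import boto3

    # Hypothetical credentials pulled out of a vendored library's config file.
    s3 = boto3.client(
        "s3",
        aws_access_key_id="AKIA...placeholder",
        aws_secret_access_key="value-found-in-the-library",
    )

    # One call is all the "looking around" takes once the key is accepted.
    for obj in s3.list_objects_v2(Bucket="vendor-bucket").get("Contents", []):
        print(obj["Key"], obj["Size"])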


US law:

> If you did not have permission to enter [...] the likely charge is a misdemeanor charge of illegal entry (also known as entry without permission).

[1] https://www.avvo.com/legal-answers/is-it-breaking-and-enteri...



Yep, what the guy did here seems at worst a misdemeanor, and only because a company's feelings were hurt. The vendor brought charges against an employee of their customer, someone who was paying to use their product.


If you don't know what the key does who are you reporting it to?


If that's unclear the answer is simple, destroy the key. Otherwise you can try to be a good neighbour and let them know. You do not get to just open random doors and see what's going on.

The real issue here is whether this instance is comparable, not whether opening doors with lost and found keys is a bad thing.

The real difference is whether they 'found' the key or if they were handed it. In this case I'd argue they were handed the key, as there was no plausible protection mechanism preventing them from accessing the key. It wasn't lying around somewhere forgotten or secret, it was in plain sight.

And frankly, we need some good Samaritan laws for cases where someone responsibly discloses a vulnerability without doing further harm; even if what they did was illegal on its own, it certainly should not be in light of the fact that they responsibly disclosed the vulnerability.



It's not a "lost" key if it's found hardcoded in an easily available place (e.g. an application). It's a negligently placed key leading to a vulnerable place that is going to get into the hands of a malicious person.


Troy Hunt called that behavior "way too far into the grey for my comfort" in a recent post about the massive Naz.API leak.


In that case the keys he has were definitely not his to use. Here we're looking at someone handed an API key by a company and then using it to access the API.

A lot of this depends on whether you view a phone as a device running third-party programs on behalf of the user, or a device that third parties allow users to run software on, on behalf of the third party.

A lot of society is moving towards the latter view, which is of course fundamentally wrong.



Says the guy who gets emailed hacked databases from cybercriminals


Right, the guy whose job is receiving hacked databases of user credentials from cybercriminals argues actually using said credentials would be going too far.


> It is like being given a key card for security clearance in a building.

Not really. Apparently they interpret the software which has the embedded key as an "agent".

So it's more like you go into a building and get assigned a person who opens the door and retrieves the stuff you want from there for you. Turns out that the person was on drugs and promptly fell asleep ("malfunctioning") and you take the key to get into the room but it turns out you've just witnessed that this company is a huge scam. Now they want to sue you.



It's rather different. Once I saw that my neighbour had left the key sticking in the lock outside. It didn't feel like an invitation to me. Also, garden doors aren't necessarily locked, and I think this is difficult to legislate.

German law applies to TFA, so compare Hausfriedensbruch (criminal trespass): the adverbs of choice are "widerrechtlich", somewhat like undefined behaviour, and "ohne Befugnis", essentially without permission, e.g. when police entry is not lawful. The official translation actually distinguishes "unlawful" and, later, "without permission". I always feel it says, like, illegal entry is illegal. Vandalism uses the same words; section 303a applies them to computer sabotage as "data manipulation".

https://www.gesetze-im-internet.de/englisch_stgb/englisch_st...

https://www.gesetze-im-internet.de/englisch_stgb/englisch_st...

PS: the relevant section is 202a "Data espionage", following another comment.

https://news.ycombinator.com/item?id=39047283

https://www.gesetze-im-internet.de/englisch_stgb/englisch_st...



This deserves further commentary.

In my humble opinion, what really grinds my gears is the abuse of the letter of the law, “circumventing the access protection”. If your fence has gaping holes, it's not a functional fence.

Since this is hackernews, graffiti "vandalism" is still a good example. The only protection of public facing walls is law enforcement, which is spotty. Private property such as trains may employ fences and security, which can be circumvented. Train stations and trains in service have to open anyhow. Terms of Service may explicitly forbid pollution, defacement, however you want to call it (this holds by analogy if you leave logs on the server, my point being, as it were, that security is a process).

The law makes a practical difference for each of these cases, but the spirit of the law is the same in each case and the baseline is that the law is whatever is deemed appropriate by the powers that be, the finder of facts, population as represented by select individuals, the common joe. This, in turn, is supposed to be enshrined in constitutions of sorts. In sum, “unlawful" (“widerrechtlich” or “unbefugt”) derives in different ways from constitutional rights.

In the given case, subsection 202a is based on confidentiality (Art. 10 GG "privacy of correspondence"), but in my example (guilty as charged) the laws against vandalism are based on property (Art. 14 GG). As a result, your comparison is a type error for me (as is "circumvent", if access control is a process).

https://www.gesetze-im-internet.de/englisch_gg/index.html

Comparative Law is a real thing, by the way, that is most foreign to me, but I make do.



> Since this is hackernews, graffiti "vandalism" is still a good example. The only protection of public facing walls is law enforcement, which is spotty. Private property such as trains may employ fences and security, which can be circumvented. Train stations and trains in service have to open anyhow. Terms of Service may explicitly forbid pollution, defacement, however you want to call it (this holds by analogy if you leave logs on the server, my point being, as it were, that security is a process).

Graffiti satisfies the criterion of Sachbeschädigung (criminal property damage). Nothing (except some reputation) was damaged by the "hacking" involved here.



Well, depending on what kind of data was stored in the database he accessed, this may constitute a data breach according to privacy law in which the vendor also needed to assess whether the incident needs to be reported to its data subjects (i.e. all customers in the same database). Those could then possibly sue for damages.

Of course if that's the case the vendor would have to be found to be in violation of privacy laws by not using state of the art protections (e.g. not shipping plaintext passwords, not using the same database/credentials for data from different customers) and might be fined for that separately.



I don't see why an investigation into excessive logging requires running queries on a database.


If they connect with a desktop application, said application might run some queries against `INFORMATION_SCHEMA` in order to display schemas, tables, and columns. If the investigation is open-ended enough ("we have no idea so just figure it out"), then it might seem reasonable to connect to the database to see what it's about.
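
For instance, a minimal sketch of the kind of introspection such a client might do on connect (host, credentials, and the Connector/Python driver are assumptions here, not details from the case):

    import mysql.connector  # assuming the MySQL Connector/Python driver

    # Hypothetical connection details, stand-ins for whatever the app ships with.
    conn = mysql.connector.connect(
        host="db.vendor.example", user="appuser", password="hardcoded", database="shopdata"
    )
    cur = conn.cursor()

    # Listing tables via INFORMATION_SCHEMA is routine client behaviour,
    # yet it already reveals what else lives on that server.
    cur.execute(
        "SELECT table_schema, table_name FROM information_schema.tables "
        "WHERE table_type = 'BASE TABLE'"
    )
    for schema, table in cur.fetchall():
        print(schema, table)

    conn.close()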

It's already hard to see the malice in their actions but it's harder still when I consider that they immediately alerted the company who made the error. Even more when I consider that the company fixed the error. This developer did the company a favor and they had charges filed against them over it.



It doesn't have to be malicious, it can just be negligent. What he did was essentially digital trespass. He assumed the database only contained data for his client but he knew the database was hosted and operated by the vendor and did not ask for permission to access it. Instead he analyzed the software (in the most basic way, i.e. opening it in a text editor and looking for plaintext strings) to retrieve credentials and used those for accessing the third-party system without permission.

It's of course ridiculous that the police and prosecution called this "decompilation" but I agree with them to a point: the password didn't spontaneously fall in his lap, he deliberately went out to look for it in the software itself, even if he found it in the most trivial way possible. And then he decided to use those credentials to access an external system he must have known did not belong to his client without asking for permission. This rapidly enters grey hat territory and crosses a legal line.

I think it's right to be appalled by the mere act of opening the file in an editor being considered suspect by the prosecution but I also think it's important to understand that it wasn't just the software analysis that got him into trouble, it was the digital trespass that this enabled.
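
As a rough illustration of how little "analysis" this step takes, a few lines that mimic the Unix strings utility are enough to surface plaintext credentials in a binary (the file name below is made up):

    import re

    # Print printable-ASCII runs of 6+ characters, roughly what `strings` does.
    with open("ModernSolutionConnector.exe", "rb") as f:  # hypothetical file name
        data = f.read()

    for match in re.finditer(rb"[ -~]{6,}", data):
        # Anything like "mysql://", "user=", or "password=" jumps out immediately.
        print(match.group().decode("ascii"))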



This is a high-profile case because they went public with it, but I assure you, stuff like this happens a lot more frequently than you might think (the public can attend court proceedings in that country, but they are neither published nor publicized for the vast, vast majority of cases). It's why anyone remotely familiar with the legal situation (not just in Germany, many countries have similarly broad laws, like the CFAA or the Computer Misuse Act) will tell you to never ever, ever do vulnerability disclosure yourself to the responsible party.


What should you do instead? How would someone familiar with the legal situation disclose a vulnerability?


Yes, he logged into the server using the credentials embedded in the app. Since the server contained information from other users, this would clearly be some kind of crime if he used this access maliciously, or maybe even if he just logged in knowing that he wasn't supposed to be allowed to.

But I think the salient point here is whether or not he could have known that before logging into the server. Since the credentials are in the app, should he assume that the company's security is so bad that this would give him access to all their customer data? He is obviously allowed to use the app, and the app uses these credentials so it's not too much of a leap for him to think that he should be allowed to use them as well.

Regardless, I think the result of this ruling will clearly be bad for computer security. In the future maybe someone who finds a vulnerability like this won't report it out of fear of legal retribution.



I guess the moral of the story is, if you find hardcoded credentials, immediately inform whoever is in charge without actually using the credentials.

Or can that still get you sued?



I think the moral of the story is, if you stumble on such a vuln while working in Germany in the future, it's best practice to sell it on the darknet, since you unknowingly already committed the crime anyway. Might as well get paid for it.

Please don't shoot the messenger, I didn't write the stupid law.



I'd probably not say anything. At most anonymously. I'm not taking the risk as I stand to gain nothing.


It's not clear, but what is clear is that cases like this can and often do have a chilling effect on legitimate, well-intentioned reporters of vulnerabilities, which leaves everyone else at even greater risk due to negligence on the part of the company. We should be highly critical of these legal outcomes, particularly when there was no intent to harm.


Not a lawyer, and certainly not in Germany, but spend a lot of time reading and noodling about this space. There's maybe a reach-y contract lawsuit if you violated reverse engineering terms; it wouldn't win, but it could be annoying and expensive.

Actually using the database creds to the point where you can tell a story about the data in the database though is enough to put you at criminal risk in the US; the DOJ doesn't prosecute good-faith vulnerability research, but depending on the kind of poking you do and the kind of logs you keep of what you find, you can put yourself in a position where your good faith isn't assumed.



I don't think the "hardcoded" part is at issue here. If this wasn't a MySQL database but an API that exposed other customer information, he would have the same moral duty to disclose and the same legal liability, I think.


My thinking is that prod credentials that you aren't supposed to have were used. So if you've been asked to investigate, and see something this glaringly bad, then you need to stop immediately, inform your boss, and get explicit approval before continuing.


Maybe not? Again, only speaking to US law, but your intent matters a lot here, and you have more plausible deniability sending API requests than you do making a direct connection to a database.


> you have more plausible deniability sending API requests than you do making a direct connection to a database.

A direct connection to a database is an API, too. :-)



Not normally. Which matters here.


But in this case, it is, since this is the way the client connects to the (database) server here.


$15 says there is no law about illegally accessing information systems, anywhere in the world, which even defines "API".


What's your point? People have been convicted for doing things with actual APIs when prosecutors were able to demonstrate that a reasonable person would have assumed they shouldn't have done those things.


API or not API is a technicality which bothers only tech nerds. The law won't bother with such nuances.


Can't you just forget you ever saw the hardcoded credentials? That seems like the safest course of action to me.


I think it all comes down to:

it's not a crime to build a house that has open doors and windows.

but it's certainly a crime to enter one as an uninvited guest, let alone do things with traceable logs.



> but it's certainly a crime to enter one as an uninvited guest, let alone do things with traceable logs.

But this is the entire issue. It's common practice for a business to have open doors because they intend for anyone to come inside and patronize their establishment. Some of the businesses are even in residential houses, where the area is zoned for that sort of thing.

The question is what that's supposed to mean for a computer system. Obviously answering requests is the intended purpose of a public-facing internet server, and the general expectation is that if you're not allowed to make a particular request, the server will refuse it. Protocols even have widely supported standards for this, e.g. HTTP 403 Forbidden.

So what are you supposed to make of it when you issue a well-formed request and the server answers it? The default expectation is naturally that they intended it to, because if it was intended to do otherwise then they'd have configured it to do otherwise. How it responds is how you know if you're allowed to do it.
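
In code, that default expectation looks something like this (hypothetical endpoint; the requests library is assumed):

    import requests

    resp = requests.get("https://shop.example.com/api/orders")  # hypothetical endpoint

    if resp.status_code == 403:
        # The server explicitly refuses: the operator does not want to serve this request.
        print("Forbidden - the server said no")
    elif resp.ok:
        # The server answered; the naive reading is that the request was permitted.
        print("Served", len(resp.content), "bytes")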

At some point you may be able to reason out that what's happening is the result of a misconfiguration (exceptional circumstance) instead of the standard expectation (server refuses requests if server operator intended them to be refused), but this may not be obvious to the user until after it has already happened.



According to data protection laws, it's certainly a crime to leave some systems unprotected like in this case


Please don't compare houses with databases; this is pointless and the discussion usually degrades into house-related things


Do it anonymously if you do it at all.

In my opinion this is like filing criminal charges because someone opened a door at the front of your business. Normally what is known to your front end is not sensitive data for the entire user base. So if you take a peek in, it's the same as wondering what the extra front door is to a brick and mortar store. You've got the main door with the OPEN sign and then a plain door that, whoops, is unlocked and has all of your customers' files lying out on tables. At this point you've done nothing wrong. If you start rummaging around you're outside of plausible deniability.



There's nothing confusing about it, it just doesn't frame it in the way you prefer.

From the perspective of the developer, it's natural to assume that the password was in place to prevent non-users from accessing, not legitimate users. After all, the credential wasn't hidden or obscured in any way. When it became clear that users weren't supposed to have access, it was reported to the vendor. Am I missing something here?

On one hand, there's a developer doing their job. On the other, there's another "embarrassed" company retaliating and intimidating would-be bug reporters. It seems crystal clear what's going on.



> but actually using them to poke around.

This is true, but he believed that the database was held exclusively for the client, hence only containing data belonging to the client, who gave him permission to access his data. Apparently the name of the database also seemed to indicate this.

As soon as he then noticed that it contained all the data of all customers, he disconnected.



It doesn't matter what he thought was in the database or what it was for. He knew it was hosted and provided by a third party for use with that third party's software which his client was using.

His crime wasn't accessing the data. His crime was accessing the data in a way he had not been authorized to do. As far as he was concerned, the investigation should have stopped at "there are hardcoded plaintext credentials here". But not only did he then check whether those credentials were correct, he also used them to go spelunking. That's trespass even if he had reason to believe there were no other customers' data on that server.



I stand by the phrase "Hacking Is Not A Crime".

It's what you do with the data once you have access to it. If you do nothing, it shouldn't be a crime, the crime should be the, presumably, nefarious usage if used.



I would not recommend trying that defense in a courtroom.


You can do a lot of damage by simply accessing data: blackmail, state or industrial espionage, insider trading, HIPAA violations, obtaining signing keys or passwords for lateral movement, etc. All those require additional intent, to be fair, but it's hard to prove intent and much easier to prove access. And there are very few legitimate reasons to access someone else's private data, and many nefarious ones.


Are you hereby giving people permission to hack your devices as long as they only use it to do good?


If someone can suggest a better (more accurate and neutral) title, we can change it above.

(It's best to use a representative phrase from the article body rather than making up new language; that's usually, though not always, possible.)



That isn't hacking, which is what the title implies. Hacking is more involved and is about exploiting systems.

This is just taking the keys and unlocking the door to your benefit.



This is actually a big problem for Germany, because the cited StGB 202 ff. penal code paragraphs have made security research in any private sector shape or form impossible, or at least highly unattractive.

Now a gap of almost 20 years has opened, where basically no young engineers have been interested in the field, let alone trained. The biggest companies with the deepest pockets have been mopping up anyone they could find. Top talent went abroad. And so the majority of German businesses which are SMB get hacked more every day. Nobody audits anything. Unfortunately, anything networked is a security risk these days.

I caution that it is highly naive to bet on this getting thrown out at higher court levels. Defendant is looking at YEARS of wasted brain cycles, trying to go from AG to LG to OLG to BGH. My guess is a 100k EUR of fees also wasted. And for what. Because a company couldn't properly secure their data, you told them that, and as a "thank you", they sued you in court?

My advice: If there is no clear bug bounty program, or it is not your own company, or you weren't tasked in writing and paid by the very company to find any holes, don't make it your problem. Suppress any good samaritan helper complex you might have. Wipe all files and talk to nobody. Especially not in your place of employment. Once a lawsuit is involved, anyone questioned will say "Oh, Mike from DevOps figured that one out from the hexdump". You will regret it.

Some of the older German infosec dogs are aggravated by this so much, that they refuse to help any governmental organization if there is an incident. Lernen durch Schmerz.



No, he was found guilty for using those credentials to connect to the database. I can’t speak for German law, but at least in the UK this would be an open-and-shut case, it’s a clear violation of the Computer Misuse Act.

You can like that or not, but if you’re in the position to be doing research like this, you really ought to know the basics of the law.



> research like this

> His crime: he was tasked with looking into a software that produced way too many log messages.

The developer wasn't doing security research. It sounds like they just had a bug they were looking into. Connecting to the database and realizing what it is to immediately disconnect and report it responsibly shouldn't be something that comes with punitive measures. As another commenter pointed out, this incentivizes people to sell this knowledge to others who will actually "misuse" it.



Every coder in Germany should unionize and go on a password strike. Refuse to do any part of your job that requires authentication unless you have a signed written statement verified by a lawyer as to the scope and use of the access. And then only use that level of access.

Can you fix this bug? Sure, I'll be chilling on this sofa while you get me my access.



But turning on the app "uses" those credentials. So were all of their consumers guilty of hacking too?

What's the difference. MAYBE I could see this as a violation of the ToS but It's a far cry from "hacking".

Having a password doesn't mean they were trying to keep people out. They shipped the password.

That's like going into a building and they HAND YOU a keycard, and say don't go anywhere you aren't supposed to. And then it's actually a master key. How do you even know that it's going to let you into places you aren't supposed to go.

I have creds to Google's services but they only give me access to MY stuff.



> That's like going into a building and they HAND YOU a keycard, and say don't go anywhere you aren't supposed to. And then it's actually a master key. How do you even know that it's going to let you into places you aren't supposed to go.

And if you went somewhere you're not supposed to and found out it's a master key by trying it in those places you're not supposed to access, you'd be accused of trespass.

It's okay to argue that the punishment is excessive or that the law should factor in malicious intent more but the law is pretty clear and his behavior wasn't innocent white hat hacking even if he meant no harm.

> MAYBE I could see this as a violation of the ToS but It's a far cry from "hacking".

Maybe? How about definitely. He didn't use the app for its stated purpose, he extracted the credentials and then used them manually. That's a clear ToS violation at least. That this isn't a sophisticated hacking attack doesn't mean it is legal. You can argue it should be but it's easy to see why it isn't even if you just consider property law.

Actually the only problem with the law in my eyes is that it doesn't distinguish very well between malicious abuse (i.e. abuse with the intent to cause damage or impact security) and non-malicious abuse (e.g. building an unauthorized third-party client) and that it doesn't have special provisions to protect security research akin to whistleblower laws.



Sorry but you’re wrong. In the eyes of the law this is very clear cut.

Morally is another question of course.



Is there no mens rea requirement in Germany?


I understand, but that doesn't mean I don't want to yell into the void about it.


Sorry, but the law is wrong.


My house door is usually open. That doesn't mean people are free to enter and use the toilet.


But that's not what's going on here. It's like you gave me a key to your house and said it's so I can get my stuff. And then when I used the key and saw everyone else's stuff in the same room, I let you know, and you threw me in jail for trespassing with a key intended for me to use to get into the house.


I'm not sure whether it's that easy. AFAIK he had a customer that wanted him to investigate why the customer's system was flooded with some data. He ran the connector to some other service that the data seemingly originated from and observed, in his firewall, a connection being opened in plain text to a remote MySQL server. He took a look at this and saw that the credentials used were equal across all tenants of the MySQL DB. So it wasn't just his customer's data that was exposed, it was the data of all tenants. AFAIK he then created some hashes of user data and exported this, so he could report this to the authorities and give users the ability to check whether they were listed in the system that had to be considered compromised. The DB exposed data of around 700k end users. He also contacted the company that runs that DB about this issue.

The vendor of that connector then issued a new client that used TLS, which he also circumvented to show that the issue is still valid. He is also accused of decompiling the client software to obtain the password. IIRC, he instead claimed to just have opened the file in notepad.



Sounds to me like the database credentials were embedded in the application, so presumably the application would log in to the vendor's server as an intended action. Does this mean all the vendor's users must be charged with hacking also?


The clue’s in the name: “misuse”.


Is there evidence he misused the data or the server? Did he download all the data and sell it to third parties, spam the hell out of existing users, or anything like that? How is verifying the credentials misuse?


He didn’t just “verify the credentials”. He was in the database making queries, viewing private data.


"Viewing" private data for purposes of verification of the issue. How about you just don't ship passwords in the application like some negligent troglodyte?


How do you find out it was private data without viewing it first? How is that misuse of data if you just view it?


I wouldn't say it's so clear. If you read the article, it seems like the developer was investigating an issue and found the database credentials, assumed the database connection was single-tenant (or that the user would be limited by permissions) since the software was connecting directly to it, and used them. When they realised they had access to more data than intended, they disconnected from it.

I have done exactly the same thing in similar circumstances – I had a desktop software vendor that we had issues with, saw that the config files stored database credentials in plaintext, and connected to the database. In my case, the database was single tenant for our company so I managed to get what I wanted done.

Surely intent must come into play when it comes to applying the law in cases like this? It doesn't seem like the developer had any intent to access a restricted system.



Or you just could go the whole other way. Don't report. Sell.


I wish more people would go the other way. Companies should hire people fulltime to find and report bugs.


INTENT is the keyword for most laws. If the intent here was to check the security, it is 100% legal within the EU. Don't know about the UK. I guess the guy poked around.


The current German law says something different. Getting access to data by overcoming protection is a crime. It doesn't matter what you wanted to do with this data, if anything at all.

I've read the current government had some plans to fix it, but they have a lot to do at the moment.



Seems like those laws need to be rewritten. Intent matters, and it doesn't seem like this "hacker" was trying to do any harm.

Company got caught with their pants down and want to punish this person for exposing that.



Yeah it is a chilling effect that will make German systems less secure, and other countries who are immune from German prosecution are going to exploit that. This story alone makes me refuse to work in Germany as a developer let alone in security.


Hackers somewhere in Novosibirsk who look for ways to disrupt or exfiltrate data from some of Germany's bigger IT systems are likely very gleeful now.

Look ma, they just leave their passwords in cleartext, and people are scared shitless to report it, lest they be sued! It's a pure gold mine!



I agree, this conviction is really about challenging corporatism and making them look bad. They really want "the peasants" to know their place and not peep through the nobles' windows. The state will almost always side with those with the most money, unless lawyers somehow shine the light of public opinion on it, then they might have a chance. This is why you do stuff anonymously.


Sounds similar to a case where I am from:

https://www.techdirt.com/2022/02/25/turns-out-it-was-actuall...

The "hacking" was decrypting social security numbers from BASE64.



Funny message in the encoded Base64 on that article, which reminds me of a musing I had a while back.

Imagine the number of lazy programmers who paste stuff into an online Base64 decoder. Imagine all the stuff that is in those payloads!

Running a site like base64decode.org would be a fantastic honeypot.



That's why many large firms block online code assist tools.


Yeah, this is exactly the case that the headline reminded me of (I got instantly downvoted for commenting about this for some reason). If they had actually encrypted the data it would have been fine, but BASE64 encoding is not encryption. It's trivially easy to decode base64: https://developer.mozilla.org/en-US/docs/Glossary/Base64#the...
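
For the avoidance of doubt, "decrypting" Base64 is a single standard-library call, with no key or secret involved (the encoded value here is made up):

    import base64

    encoded = "MTIzLTQ1LTY3ODk="  # hypothetical value, not real data

    # Base64 is an encoding, not encryption.
    print(base64.b64decode(encoded).decode("ascii"))  # -> 123-45-6789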


So many "hacks" are the equivalent of some fool leaving the front door wide open. If you left your front door wide and were robbed the public would have 0 sympathy for you, yet people scream at the hackers when these companies cheap out and don't do shit to update/maintain/enforce basic best practices around security.


It's not about "sympathy". It's about crime.

If you left your front door wide and I robbed you, I'd have committed a crime. There's no "but the front door was open" defense.



But nobody was robbed. This guy looked in the house and then immediately called you to tell your door is open. Instead of thanking him, you sue him for trespassing. Technically, he would be guilty of trespassing.


A private residence is a bad analogy. This is like being sued for trespassing because you stepped over a green-grey marble line on the grey-green marble floor of the open ground-floor lobby of an office building where you work.


It’s like telling someone the pin to your safe and then suing them for them opening it. They didn’t even steal anything. They just opened the safe that you gave them the pin for.


> If you left your front door wide and were robbed

It would still be a crime. I would and should be chastised, but the person who robbed me should still receive a proper punishment.



Where I live, walking in the open front door of the house of a stranger is rather antisocial, but not in itself criminal. The real world analogy fails, because this person didn’t take anything (copying isn’t taking since the owner is not deprived of the original)

Even unlocking their door with a key you found lying in the street, and then going in, is not in itself criminal (where I live). If you go on to commit a crime while inside, or entered with the intention to commit such a crime, that added fact makes it a crime ("break and enter"). But merely unlocking the door and entering is not, in itself.



> If you left your front door wide and were robbed the public would have 0 sympathy for you

Not sure where you live for that to be the case, but someone coming in because I left my door open is not normal, even if I left my door open. Even if they claim they were "making sure everything was safe".



Normalcy doesn't matter. I think the point is that you're not going to get much sympathy if you leave your front door wide open, leave for work or better yet go on Christmas vacation and have no sign anyone is home, and then come home to find something's been stolen. Maybe a better analogy is leaving a laptop in your car overnight and leaving your car unlocked parked on a busy street.

Obviously stealing is a crime and we shouldn't victim blame. But with a lot of software the business isn't much of a victim so much as their customers are, and there doesn't seem to be much incentive for companies pro-actively securing their software. You could argue in hindsight the developer would've been better off selling the vulnerability and/or data to the black market rather than reporting it.



Isn't this the opposite of good Samaritan laws? If you see something, say nothing, do nothing.

I wonder, if it's illegal to find these problems, would it be legal to notice there might be a problem, stop, and short the company stock?



Tor + twitter (if you can actually register there with tor):

"Hey, I just happened to find out that there is a password here in this app, at offset X, here's the screenshot from the hexdump with the visible password... I'm not allowed to check what that password is, even though there is also a username and a host next to it, and clear indication that it's an sql connection, but i'm not testing this, but i'm warning you, the general public, that this here exist, please don't try connecting to this IP using this username and password, thank you!"



Issue is those flaws can go unnoticed for years, so you may need to give them a nudge for your short to be successful.

There’s also the fact that at least for US companies massive data leaks/breaches often have no negative financial impact on the company.



> would it be legal to notice there might be a problem, stop, and short the company stock?

Yes. As long as you don't use inside information this would be perfectly legal. It's pretty much what companies like Hindenburg Research do.

The problem you would find if you actually tried to do this is that investors pretty much don't care about security issues, so the stock price wouldn't go down after you revealed the flaw. That's even if they're publicly traded which doesn't appear to be the case here. I think it's these guys but I don't speak German so don't quote me: https://www.modernsolution.net/



Better to just pretend you never noticed it. Even telling them their password is visible is risking being the messenger that gets shot. Using the password is right out.


Paragraph 202a of the criminal code:

https://www.gesetze-im-internet.de/stgb/__202a.html

Roughly:

„Gaining access to data that is protected with special methods against unauthorised access, either for personal use or for others“

So apparently, hardcoded passwords baked into the client do qualify for that.



Yeah. It's well known as a really shitty law, which should never have been passed. But here we are. Maybe until 2050 they fix it or so.


§ 202c is even worse:

> https://www.gesetze-im-internet.de/stgb/__202c.html

English translation based on the DeepL translation:

"§ 202c Preparing the spying and interception of data

(1) Any person who prepares an offense under § 202a or § 202b by producing, procuring for himself or another person, selling, supplying to another person, disseminating or otherwise making available

1. passwords or other security codes that enable access to data (§ 202a (2)), or

2. computer programs whose purpose is the commission of such an offense,

shall be liable to a custodial sentence not exceeding two years or to a monetary penalty.

(2) § 149 (2) and (3) shall apply accordingly."



The silver lining is that the punishment for crimes in Germany is generally extremely lenient. The article mentions a 3000 Euro fine plus legal fees.


Had a food startup in the Netherlands.

Worked with PostNL, the main (and formerly governmental) organisation for sending mail.

Weekly we would upload our orders into their system, and could see our history.

Then one day we could suddenly access all other clients' history and export their users' data. Many of them were direct competitors, and their mailing lists would have been quite valuable to us.

My partner exported all of Marley Spoon's (a bigger, funded competitor) data in Excel, and a few others'. When he told me, I told him to delete it ASAP; even though it's fun, you don't create a liability. But we could have used it to grow 10-30% in a few weeks.

They never reported it, which they were legally obliged to do under EU law.

All to say, if you get the keys to the castle, maybe don't use them. Or maybe you do.

We should have, and could have, used it in price negotiations, since they almost doubled the prices they charged us for the next few months and didn't have any mercy. Let alone misplacing 3-8% of our orders and not refunding us.

But instead we moved to a few other delivery services (with all their own flaws).



I think a lot of issues are never reported because of stuff like this. We hear about white hat hackers getting sued all the time. And in the end it'll hurt us all because crooks don't care, and will use these found-but-undisclosed security holes to their advantage. Then, when they threaten the company to either pay or get a public dump of their wide-open database, management refuses and gets a slap on their wrist by the government once it's released - and customers receive a "sorry, we got hacked"-mail between a bunch of highly personalized phishing mails.


The law shouldn't be involved in this. Fix your systems. Taxpayers should not be forced to defend poorly designed systems.


> The law shouldn't be involved in this.

There were huge public rallies when § 202c StGB (https://news.ycombinator.com/item?id=39047767) was about to be introduced.

In Germany, we learnt that resistance and rallies against the politics are typically futile.

Thus, in recent years, in some circles in Germany it has actually become fashionable to say "Politiker" (politician) in a tone as if you were speaking of a mass rapist or child abuser (and covertly do this word replacement in your head). Believe me: in some circles in Germany, the fury against the politicians of basically every party is insane. :-(



This is like giving someone a book you wrote to proofread, with your password unintentionally in the text. They use it to login and then tell you about it. Sure, they shouldn't have logged in, but it doesn't feel like it deserves criminal charges.


Should have sold the information on a forum like any reasonable person would.


Sounds like they're a customer of the vendor, so they'd be pwning themselves if they did so.


Employee of the customer of the vendor!


The laws as structured and enforced will teach people to do this in time.


Good marketing pitch for disclosing to the highest bidder rather than responsibly to the industry: if you're going to jail anyway, you might as well get top dollar for your trouble.

Oh no, we did a terrible job and hardcoded credentials; someone found and tried a password and then reported it without stealing or destroying anything but our egos... the horror. Let's run to the cops and potentially ruin a life.



It's a very fine line. Once he had the database credentials, that's all he needed to tell the company to fix their code. Connecting to the database is what did him in.

We need white hats who want to find vulnerabilities for good, but when you exploit a target and they aren't aware of it until after the fact, that's still a crime. I don't know what the safe way of doing this is, other than only doing white-hat hacking on systems you control. Any system outside of your control should not be exploited unless the company has an agreed-upon contract that indemnifies you from any harm caused.



Alternatively, reporting the vulnerability is what did him in. This seems to encourage an adversarial environment where no one will report any vulnerabilities they find for fear of repercussions. If their good faith efforts will be used against them, they may as well act in bad faith.


The developer claims that he assumed the database credentials were specific to the customer whose problem he was helping to debug. Once he realized that he could see other data, he closed the connection and notified the company.

I'd argue that a customer who accesses their own data on a vendor's database via one client also has the right to access it via a different client.



For more context: his crime was to be a business competitor of this shady company Modern Solution, which shipped a password granting access to all of its customers' data. He was not an independent security researcher.


This is outrageous.


No, it's Germany. That law is a seal of quality for German software: so bad security-wise that it needs obscurity enforced by lawsuits as its main protection.


Precious Germany. Swiss cheese security enshrined and protected by law but will blow an absolute fucking gasket if your open source software is even remotely applicable to national defense.


Explain the latter part?


See NixCon 2023, but also universities and student organizations in the country in general, which mandate that you pretend the defense industry doesn't exist if you want to have a presence there.

Just about anything German-hosted in the Fediverse is absolutely allergic to the industry and users will lose their fucking minds and hound you to the ends of the earth and send you death threats if you forget this fact.

Basically, shitty ideologues who put purity ahead of the technology dominate every level of the discourse there. It wastes everyone's time and distracts from things that actually matter. In fairness, America isn't far behind.



> NixCon 2023

what happened there?



https://discourse.nixos.org/t/nixcon-2023-sponsorship-situat...

TL;DR: they accepted sponsorship from Anduril, a weapons manufacturer, and publicized that fact 3 days before the conference. The German scene is quite pacifistic and threw a fit.



While "threw a fit" is accurate, it really does downplay the level of hostility that was expressed in response to the situation.

It was unbecoming of any group that wishes to call itself a community and it certainly has chilled participation from reasonable people.



So what happens when the vuln is accessed from someone not under German legal jurisdiction?


Vee schend zem a sthrungly vorded letter!


If you needed security you should've used a fax!


After reading the text I predict this is completely sensationalised and that something worse happened.


It's a lowest-level court; they're known to have wildly absurd legal opinions at times. Having stupid laws on the books (like the one in question) doesn't help.

The curious bit is that this law is from 2007, so apparently this is an angle that escaped all the attorneys and courts who applied this law over 16.5 years, or the defense could have shot down this line of reasoning by pointing out that this isn't what the law intended. (We don't have case law, but there are means of harmonizing outcomes once cases end up at higher-level courts.)

My guess is that this won't hold up for long given the circumstances (trivially got the password, accidentally gained more access than expected, immediately disconnected upon notice)



I've been following this for a long while now, as I frequent one of the forums that this person is also active on.

Even if the sentence is overturned by a higher instance, the confiscation of all devices for months and the additional legal trouble have made pretty sure that this person will not make the same "mistake" again.

The company's public statements during the whole affair were another story entirely. For these alone they'd deserve the next guy to just sell the credentials on a forum and have them blow up.



According to the linked heise.de article the defendant assumed the credentials were for a database that only contained his client's data, and immediately disconnected as soon as he realized he was actually seeing data for all of the complainant's clients.


Ah, the infamous "hackertoolparagraf". The hacking tool in question? According to [0] it was phpMyAdmin.

0. https://nitter.net/der_sofc/status/1747644600469127386

edit: Apparently der_sofc is the person who got sued.



This case has been going on for a few years.

Last summer the court declined the prosecutor's case (in this system, the prosecutor files their case with the court, and the court does a quick scan and will dismiss the case before scheduling the trial if it's obviously unsound - happens fairly rarely). Prosecutors got this overturned by a higher court, which means this trial happened at the same lower court, but with a different judge than the one who initially dismissed the case.

> According to a decision by the Jülich District Court on May 10, 2023, the criminal proceedings against the security researcher have been dismissed. The court assumes that no criminal offense has been committed because the data accessed by the security researcher was not sufficiently protected. "Only data that is specially protected against unauthorized access is subject to the scope of protection of the criminal offence. This presupposes that measures have been taken that are objectively suitable [...] to prevent access to the data," the court's decision states. "The court does not agree with the opinion of the public prosecutor's office that password protection as such is sufficient. A password does not always provide effective data protection, for example if it is too simple or is used in a standardized way for certain applications. In such cases, the provision of access to data does not constitute an offense."

> Through its own investigations of the Modern Solution software, heise online was able to confirm that it did indeed contain a built-in default password. This meant that anyone who had examined the software, which was freely downloadable from the company's website, would have had access to the data on the Modern Solution servers.
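
To make concrete what a built-in default password looks like in practice, here is a minimal hypothetical sketch, not Modern Solution's actual code: the host, account and table names are invented, and the mysql2 package is just an assumption. The point is simply that anyone who inspects a client shipped like this can read the credentials and connect with any MySQL client, phpMyAdmin included.

    // Hypothetical illustration only, not the vendor's actual code.
    // Host, account and database names are invented; "mysql2" is assumed
    // as the client library.
    import mysql from "mysql2/promise";

    // Credentials baked into the shipped client: every customer gets the
    // same ones, and anyone who inspects the program can read them.
    const DB_CONFIG = {
      host: "db.vendor.example",
      user: "shop_client",
      password: "same-password-for-everyone",
      database: "orders",
    };

    export async function fetchOrders(customerId: number) {
      const conn = await mysql.createConnection(DB_CONFIG);
      try {
        // Nothing in the credentials scopes access to a single customer;
        // the WHERE clause is the only thing separating tenants.
        const [rows] = await conn.query(
          "SELECT * FROM orders WHERE customer_id = ?",
          [customerId]
        );
        return rows;
      } finally {
        await conn.end();
      }
    }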



The linked German article adds a few nuances: first, the ruling is not yet binding and may well be appealed. Secondly, the fine seems to be partially due to a public disclosure to the press while offering competing services.


Isn’t it the vendor that should be immediately countersued by the customer for the lack of any actual protection?


The title could be read as exposing the credentials by means of an app dedicated to exposing them. The Hacker News title suggests something else, but doesn't really spell out what the "exposing" consisted of.

The text of the decision should be paramount.



No good deed goes unpunished.


Can we flag posts for clickbait headline?


So he would have been fine if he hadn't checked the vendor database? Or if he had gotten written authorization to check the database?


What a sad case, and I think nobody should do business with this corporation ever again.

Next time: use a VPN, get yourself an anonymous email address, and report it to your government's data privacy office. Make it hurt. Make it hurt bad.



Next time: never report anything. Sell it on some black market, then forget everything about it.

Alternatively, get a sockpuppet account, publicize the information anonymously somewhere, then pretend you accidentally stumbled upon it and sue the vendor for gross negligence with your data. Go on the offence. Germany is so anal on privacy laws that I suspect the whole case has been hinging on the company making the first move. Keep sucking those sweet damages. I'm surprised there's not a whole fucking industry around this behavior, which is way too common to go unpunished.

Don't be a boy scout. This seems to be frowned upon these days.



Meh, low-level court decision; this will not stand in a higher court.


If you ever wonder why Germany is trapped in a predigital state since the late 2000s: Things like this are the reason.

I've met people from all over the world, and some of the coolest hackers and devs were from Germany, but: the perpetual effort of the German government to make all things „safe“ and „stable“ hinders the evolution of the country into something greater than a nation of car manufacturers.



> If you ever wonder why Germany is trapped in a predigital state since the late 2000s

I think the term "(pre)digital" does not fit: for example CDs and punchcards are clearly digital.



A colleague had a valid German driver license with a photo stapled on, way into the 21st century. Nice.


Here's a good article on the problem. Seriously, given how excellent most other qualities of life are in Germany, and how smart/educated Germans are, the Internet situation is jarringly bad (for anyone who visits or emigrates, and isn't used to it).

I think there's even a bit of pride about it, I hate to say. Germans are pretty proud of their outdoor activities and general physical health, and "device obsession" works directly against that... and is still not as much of a thing there as in the US for example. You could make an argument for it...

https://www.settle-in-berlin.com/why-is-internet-so-bad-in-g...

tl;dr: Blame Helmut Kohl. Helmut was clearly the type of guy who would have printed out his emails (if he ever had to email at all, and then only out of necessity) until the day he died.



> Here's a good article on the problem. Seriously, given how excellent most other qualities of life are in Germany, and how smart/educated Germans are, the Internet situation is jarringly bad (for anyone who visits or emigrates, and isn't used to it).

> I think there's even a bit of pride about it, I hate to say. Germans are pretty proud of their outdoor activities and general physical health, and "device obsession" works directly against that... and is still not as much of a thing there as in the US for example. You could make an argument for it...

Honestly, I am not sure what kind of "device obsession" in other countries vs. Germany you are talking about. My impression, as a German, is that many German people value different qualities in technological products than people in other countries do.

For example, many German customers value long-lasting, robust products instead of the latest fad that will be out of fashion in a few years. For instance, many Germans who are able to afford them would love household appliances built by Miele. Also, because of German history (two dictatorships on German soil, of which one ended little more than 30 years ago), many Germans are much more suspicious of "spying devices" (e.g. internet-enabled home appliances (IoT)) and things that might track you (this is also a reason why many Germans strongly prefer paying cash).

But it is nevertheless my impression that many Germans do have quite some love for devices that fit their values; it's just that the taste is quite different from the taste in other countries.



That's a great point. I was trying to strike a noncritical tone, believe it or not. Germans, like everyone else, are shaped by personal and shared experiences of the past.


We, as an industry, need to create an institution that lobbies for fair laws on the handling of computer crimes as well as creates public awareness campaigns that help people better understand cyber crimes. I'd like to see "Good Samaritan" laws for cyber crime.

This kind of thing is what enables cyber crime.

I personally find lots of bugs with APIs, since my job involves dealing with so many of them. I basically don't report them for fear of prosecution. There's already a fear in the back of my mind when I'm trying to work around such bugs that someone will come after me, but at least I have some plausible deniability to say, "I just wrote shitty software." Whereas, if I report a bug, that means I knew about it and admit to "probing" it to elicit more information.

I literally spent 4 hours this morning working around a vendor API bug.
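
For what that looks like in practice, it usually ends up as defensive glue along these lines; a minimal sketch with an invented endpoint and an invented bug (numeric fields coming back as strings), not any real vendor's API:

    // Hypothetical sketch of papering over a vendor API bug instead of
    // reporting it. Endpoint and field names are invented.
    type Order = { id: number; total: number };

    async function getOrders(baseUrl: string): Promise<Order[]> {
      const res = await fetch(`${baseUrl}/orders`);
      if (!res.ok) {
        throw new Error(`vendor API returned HTTP ${res.status}`);
      }
      const raw = (await res.json()) as any[];
      // The docs promise numbers, but the API sometimes returns strings
      // and sometimes capitalises the key; coerce instead of complaining.
      return raw.map((r) => ({
        id: Number(r.id),
        total: Number(r.total ?? r.Total ?? 0),
      }));
    }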



That's what the EFF does.


I don’t get how the tech sector as a whole is so inept at this after half a century now.

EFF plays defense, when it should be playing offense more effectively

Organizations like it need to be donating to campaigns

Play the game until the lobbying laws change

All of our people should have been pardoned because the President was in our pockets.

Aaron Swartz would have had nothing to worry about; maybe that gets someone's attention here



Would that help the German developer though?


I struggle with the use of the word "hacking". Sometimes we want it to mean a penetration that requires exceptional knowledge and effort. Sometimes we just want it to mean "fiddled with the internals".

I once did a search in the free version of Feedly. They showed me the real search results, behind a "this is a paid feature" overlay. I submitted feedback saying they should either provide the feature or not provide it - and refrain from this in-between teasing. I mentioned that I deleted the overlay in the HTML to see the results, and they told me I had "hacked" the web app in their response.

That usage, and this usage, are ridiculous, because they imply an unscrupulousness that isn't present. And yet applying the friendlier meaning of the word, as in "Hacker News", I think is reasonable in both cases.
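
For what it's worth, "deleting the overlay in the HTML" amounted to roughly a one-liner in the browser's developer console, something like the following (the class name is invented; I don't remember what Feedly actually called the element):

    // Run in the browser's developer console; the selector is a guess.
    document.querySelector<HTMLElement>(".paywall-overlay")?.remove();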



> I struggle with the use of the word "hacking". Sometimes we want it to mean a penetration that requires exceptional knowledge and effort. Sometimes we just want it to mean "fiddled with the internals".

"Hacking" means "getting (typically) technology to do things that they were not intended to do, sometimes in a playful way". The other meaning was purposefully disseminated by the mainstream media to spread fear and hate against the hacker scene, because their knowledge about programming, computers and technology was a thorn in the side of specific groups in power.



Okay, but that clearly cannot be the meaning being used here, since to prohibit "hacking" in this sense is not unlike a blanket prohibition on "play".


Did you read the original link: https://infosec.exchange/@WPalant/111776937550399546

"Current news: A court found a developer guilty of “hacking.” His crime: he was tasked with looking into a software that produced way too many log messages."

Note that "hacking" is in quotes, which should make it clear that the poster "Yellow Flag" considers it ironic that this action is called "hacking".



Yes, and I also skimmed the German source. "Hacking" is in quotes because that's what they called it in Germany. Yellow Flag puts "hacking" in quotes to make it clear that they're citing a source and that they're not calling the action "hacking" themselves. It doesn't necessarily imply a sense of irony, they're just reporting a piece of news.


>Sometimes we want it to mean a penetration that requires exceptional knowledge and effort. Sometimes we just want it to mean "fiddled with the internals".

oo-err



Now that I read it...


Since moving to the UK, it has been my policy to pretend to see nothing.

During my first week in university, I found a vulnerability in two of their servers allowing me to execute arbitrary code/commands + escalate to root due to a very outdated kernel.

I reported this to a lecturer and was immediately told that what I did was illegal and not to poke at any of their services. Last I checked, it still hasn’t been fixed.



> I reported this to a lecturer

I wonder if such reports would be taken more or less seriously if they were made anonymously.



The ostrich approach to tech security.


If it ain't broke don't fix it ;)


Yes, it is illegal; that won't stop someone. Fools.


Greetings from Germany. I would not be surprised in the slightest if German legislation considered pressing F12 in your browser to look at the HTML to be "hacking".


Where do you think Germany is, in Missouri?

https://www.theverge.com/2021/12/31/22861188/missouri-govern...



Something similar happened in Missouri a few years ago when the state's web developers leaked thousands of social security numbers in the HTML of one of their websites. A reporter noticed the flaw, reported it to the state government, waited for them to fix it, and then finally publicized the security issue. The governor (Mike Parson) accused the reporter of "hacking" because he clicked the view source button in his browser.


Ah, Germany, where everything is precisely regulated, even the use of kitchen knives, which are of course unlawful to use outside of the kitchen. Nur in der Küche schneiden mit dem Küchenmesser! ("Only cut with the kitchen knife in the kitchen!")


> So he immediately informed the vendor – and while they fixed this vulnerability they also pressed charges.

Germany has an obsession with accusing people of crimes. Perhaps a projection?



Ah yes, the homogeneous entity of Germany, born in 1939...

To the bottom you go.



I am confused by your comment.





