(comments)

Original link: https://news.ycombinator.com/item?id=38769874

Having read all of your discussion and advice about commit messages, it is fair to say that I remain convinced of the importance and value of meaningful, informative commit messages in software development. PR-level messages and comments give insight into a developer's thought process and decisions, while commit-level messages record key information about the code transformations or modifications that occurred during the development cycle. When a large amount of code is contributed across many pulls, consolidating the commit messages in the PR itself and keeping communication streamlined throughout the development phase improves clarity and understanding for future contributors, ensuring maintainability and proper documentation for years to come.

In today's technology landscape, composing various external tools into one seamlessly integrated system has become increasingly common. It remains crucial, however, that developers keep producing concise yet comprehensive commit messages that follow the guidelines established by their organization or team, effectively communicate the rationale behind any proposed change, and offer valuable insight into potential side effects or drawbacks. A commitment to producing such messages is essential to sustaining a software product, not merely a convenience: it allows continued innovation and fosters long-term ownership of these assets by giving them a path toward a sustainable trajectory. By holding commit messages to rigorous standards, we can ensure a product's continuity well beyond its release cycle and avoid the pitfalls inherent in abandonware and vaporware. Developers should recognize the value of delivering software backed by robust, detailed, and plain documentation that helps preserve institutional knowledge for years to come, rather than software that merely serves as a vehicle for revenue, its legacy lost by the wayside over time.

In short, let us strive to create software worth preserving for future generations. Let us cultivate a culture that favors solutions with lasting relevance and sustainability over disposable products built only to satisfy immediate consumer demand. In an environment that encourages technical excellence and meticulous attention to detail, the software produced has a chance to contribute meaningfully to society rather than becoming an abandoned digital artifact forgotten in favor of newer innovations. This in turn secures the longevity and viability of the work, benefits those who come after us, and provides a roadmap for continued iterative improvement. Thank you for your time and consideration.


Original text
A copy-and-patch JIT compiler for CPython (github.com/python)
259 points by polyrand 1 day ago | 56 comments

The linked paper [0] by Xu and Kjolstad on copy-and-patch JIT is delightfully intriguing! For its original "C-like language" implementation it promises:

> We have implemented an SQL database query compiler on top of this metaprogramming system and show that on TPC-H database benchmarks, copy-and-patch generates code two orders of magnitude faster than LLVM -O0 and three orders of magnitude faster than higher optimization levels. The generated code runs an order of magnitude faster than interpretation and 14% faster than LLVM -O0.

Unless I misunderstand, its mechanism is a caching system on top of clang+llvm: it recognises AST patterns and their corresponding bytecode – copying the "stencils" and patching in variables.

I'd be very eager to see the CPython benchmarks!

[0] https://dl.acm.org/doi/10.1145/3485513



... which, in a weird bit of back-to-the-future, is exactly how Grace Hopper's original "compilers" worked, compiling (hence the term) patched versions of hand-built stencils. (The first few, A-0 and immediate successors, had program text that named the stencils directly, like what we'd now call directives for a macroassembler; later, MATH-MATIC and FLOW-MATIC added what we'd now call front ends which used the stencil language as an internal intermediate code.)


Their use of words is quite misleading.

Code generation, the process of producing the code, is orders of magnitude faster; the generated code itself, the result of that process, is not.



Thank you. I was trying to figure out why clang -O0 was faster than -O3...


"14% faster than LLVM -O0" is fairly misleading too. How does the generated code compare with LLVM -O2/-O3? These are SQL queries, which are usually fairly short programs, where I would presume the cost of compiling is negligible compared to the cost of execution.


> The generated code runs an order of magnitude faster than interpretation and 14% faster than LLVM -O0.

An order of magnitude faster than interpretation. That's the interesting part for Python, I'd think.



I think there’s syntactic ambiguity over whether “faster” modifies “generates” or “code”.


> I'd be very eager to see the CPython benchmarks!

In the talk on YouTube, the author mentions that it’s not faster than mainline CPython yet (it is slightly faster than the experimental, off-by-default micro-operation support it’s built on top of, but that was already slower than mainline, so it cancels out at best). I think the idea is for it to be merged, but only enabled by default once it becomes worth it; and that’s why the perf numbers aren’t advertised yet.

Still, I wonder what the expected peak improvement is. Looking at the current generated assembly, there’s definitely room to improve, but there’s only so much one can do without touching the data model.



The goal is to enable JIT codegen without sacrificing too much performance or adding too much maintenance burden, and a functional JIT implementation needs a few more components beyond that---most notably a facility to monitor and trace function calls for the eventual JIT compilation. Consider the OP to be one of the intermediate goals, not the eventual goal.


I don't think we disagree that the long-term goal is to _eventually_ make it faster :) I rather meant to temper the enthusiasm that some could have upon seeing "JIT" and immediately trying to compare with, say, PyPy.

> enable JIT codegen without sacrificing too much performance

This is the part I don't buy. The main point of a JIT is performance, so by definition I don't see it being enabled unless it improves performance across the board.

What I wonder is if the current approach, stated as "copy-and-patch auto-generated code for each opcode", can ever reach that point without being replaced by a completely different design along the way. AFAIK, as is, the main difference between running the interpreter loop composed of normally compiled opcodes and JIT copy-and-patching these opcodes is the lack of the opcode dispatch logic running between each op - which is good, but also countered by the slightly worse quality of the copied code.
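
(To make the dispatch cost concrete: here is a minimal, hypothetical switch-dispatched stack machine in C, not CPython's actual loop. Copy-and-patch in effect stitches the compiled case bodies back-to-back, so the fetch/dispatch/branch glue between opcodes goes away.)

    #include <stdio.h>

    /* Hypothetical opcodes for a toy stack machine. */
    enum { OP_PUSH, OP_ADD, OP_PRINT, OP_HALT };

    int main(void) {
        int program[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_PRINT, OP_HALT };
        int stack[16], sp = 0, pc = 0;

        /* The interpreter loop: between every opcode body we pay for an
           operand fetch, a switch dispatch, and a branch back to the top.
           Copy-and-patch emits the opcode bodies back-to-back instead,
           so this glue between ops disappears. */
        for (;;) {
            switch (program[pc++]) {
            case OP_PUSH:  stack[sp++] = program[pc++]; break;
            case OP_ADD:   sp--; stack[sp - 1] += stack[sp]; break;
            case OP_PRINT: printf("%d\n", stack[sp - 1]); break;
            case OP_HALT:  return 0;
            }
        }
    }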



> What I wonder is if the current approach, stated as "copy-and-patch auto-generated code for each opcode", can ever reach that point without being replaced by a completely different design along the way.

Of course this approach produces worse code than a full compiler by definition---stencils would be too rigid to be optimized further. A stencil conceptually maps to a single opcode, so the only way to break out of this restriction is to add more opcodes. And there are only so many opcodes and stencils you can prepare. But I think you are thinking too much about the possibility of making Python as fast as, say, C for at least some cases. I believe that it won't happen at all, and the current approach clearly shows why.

Let's consider a simple CPython opcode named `BINARY_ADD` which has a stack effect of `(a b -- sum)`. Ideally it should eventually be compiled down to fully specialized machine code, something like `add rax, r12`, plus some guards. But the actual implementation (`PyNumber_Add` [1]) is far more complex: it may make up to 3 "slot" calls that add or concatenate the arguments, and some of them may call back into Python code.

So let's assume that we have done type specialization and the arguments are known to be integers. That will result in a single slot call to `PyLong_Add` [2], which is still complex because CPython has two integer representations. Even when both are "compact", i.e. at most 31/63 bits long, it may still have to switch to the other representation when the resulting sum is no longer compact. So fully specialized machine code would only be possible when both arguments are known to be integers, compact, and have one spare bit to prevent an overflow. That sounds way more restrictive.
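
(For illustration, a sketch of what such a fully specialized, guarded fast path could look like. The tagged-value layout here is hypothetical, not CPython's actual representation; the point is the shape of the guards.)

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical tagged value: low bit set means "compact integer",
       payload in the upper 63 bits. This is NOT CPython's PyLongObject
       layout; it only models the guard structure described above. */
    typedef uint64_t value_t;

    static bool is_compact_int(value_t v) { return v & 1; }
    static int64_t untag(value_t v) { return (int64_t)v >> 1; }
    static value_t tag(int64_t i) { return ((uint64_t)i << 1) | 1; }

    /* Specialized BINARY_ADD fast path: two guards, then one add. */
    static bool add_fast(value_t a, value_t b, value_t *out) {
        if (!is_compact_int(a) || !is_compact_int(b))
            return false;            /* guard: fall back to PyNumber_Add */
        int64_t sum;
        if (__builtin_add_overflow(untag(a), untag(b), &sum) ||
            sum > INT64_MAX / 2 || sum < INT64_MIN / 2)
            return false;            /* guard: result no longer compact */
        *out = tag(sum);
        return true;
    }

    int main(void) {
        value_t r;
        if (add_fast(tag(20), tag(22), &r))
            printf("%lld\n", (long long)untag(r));  /* prints 42 */
        return 0;
    }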

[1] https://github.com/python/cpython/blob/36adc79041f4d2764e1da...

[2] https://github.com/python/cpython/blob/36adc79041f4d2764e1da...

An uncomfortable truth is that all these explanations almost perfectly apply to JavaScript as well---the slot resolution would be the `[[ToNumber]]` internal function and the multiple representations would be something like V8's Smi. Modern JS engines do exploit most of them, but at the expense of an extremely large codebase with tons of potential attack surfaces. It is really expensive to maintain, and people don't really realize that no performant JS engine was ever developed by a small group of developers. You have to cut some corners.

In comparison, CPython's approach is essentially inside out. Any JIT implementation will require you to split all those subtasks into small bits that can be either optimized out or baked into a generated machine code. So what if we start with subtasks without thinking about JIT in the first place? This is what a specializing adaptive interpreter [3] did. The current CPython already has two tiers of interpreters, and micro-opcodes can only appear in the second tier. With them we can split larger opcodes into smaller ones, possibly with optimizations, but its performance is limited by the dispatch logic. The copy-and-patch JIT is not as powerful, but it does eliminate the dispatch logic without large design changes and it's a good choice for this purpose.

In the best scenario, it will eventually hit the limit of what's possible with copy-and-patch and a full compiler will be required at that point. But until that point (which may never come as well), this approach allows for a long time of incremental improvements without disruption.

[3] https://peps.python.org/pep-0659/



I think there was some misunderstanding, you're arguing different points than ones I made.

> Of course this approach produces a worse code than a full compiler by definition---stencils would be too rigid to be further optimized.

Yeah, but that's not what I meant by "worse code". I just meant that even being aware this is a naive copy-and-patch JIT, my first impression was that the code was slightly worse than I expected. I don't expect the compiler to do any magic on a small code slice; I only claimed that there's "room to improve" in the currently generated code, though I may be totally wrong on whether it's actually possible to achieve by "just convincing clang" and without manually messing with the asm.

> But I think you are thinking too much about a possibility to make Python as fast as, say, C for at least some cases.

I never said this about CPython, quite the opposite.

> I believe that it won't happen at all

(FWIW, if we're talking long-term and about Python in general, it already did happen, PyPy (and modern JS runtimes) are good examples of this being possible in principle. But being able to make a language orders of magnitude faster (with some major asterisks too) doesn't mean I expect the same from the CPython implementation.)

As for your example with integer adding, I totally agree with all you said, and that's exactly what I meant by "there’s only so much one can do without touching the data model".

> In the best scenario, it will eventually hit the limit of what's possible with copy-and-patch and a full compiler will be required at that point. But until that point (which may never come as well), this approach allows for a long time of incremental improvements without disruption.

That's why in my initial message I said I wonder about the expected peak improvement. I won't be surprised if it (together with the theorized uop optimizations) barely exceeds single-digit percent perf gains, which would of course still be totally worth it. And if it's more, well, even better :) And in the worst case - which I hope won't happen - the point you mentioned is today, and copy-and-patch would never be worth enabling by itself.



> That's why in my initial message I said I wonder about the expected peak improvement. I won't be surprised if it (together with the theorized uop optimizations) barely exceeds single-digit percent perf gains, which would of course still be totally worth it. And if it's more, well, even better :) And in the worst case - which I hope won't happen - the point you mentioned is today, and copy-and-patch would never be worth enabling by itself.

Ah, so you meant that even all of them, including the specializing interpreter and the copy-and-patch JIT, may not give a reasonable speedup. But I think you have missed the fact that the specializing interpreter has already landed in 3.11 and provided a 10--60% speedup. So specialization really works, and the copy-and-patch JIT should allow finer-grained uops which can have an enormous impact on performance.

On the other hand, it is possible that copy-and-patch JIT itself turns out to be useless even after all the work. In this case there is no other known viable way to enable JIT without disruption, so JIT shouldn't be added to CPython. I should have stressed this point more, but "incremental" improvements are really important---it was a primary reason that CPython didn't even try to implement JIT compilation for decades after all. CPython can give them up, but then there is one less reason to use (C)Python, so CPython never did so. (GIL is the same story by the way, and the current nogil effort is not possible without other performance improvements that outweigh a potential overhead in the single-threaded setting.)

> As for your example with integer adding, I totally agree with all you said, and that's exactly what I meant by "there’s only so much one can do without touching the data model".

If the data model refers to the publicly visible portion of the interface, I don't think so. Even JS runtimes didn't require any change to the public interface, and CPython itself already caches lots of the data model for the sake of performance. I'm not aware of attempts like shape optimizations, but it might be possible to extend the current `__slots__` implementation to allow the adaptive memory layout.



> The goal is to enable JIT codegen without sacrificing too much performance and adding too much maintenance burden, and a functional JIT implementation needs a few more components other than that---most notably a facility to monitor and trace function calls for the eventual JIT compilation. Consider the OP to be one of intermediate goals, not the eventual goal.

It seems like the copy-and-patch approach is sort of somewhere in between an interpreter and a traditional JIT, and the authors of the original copy-and-patch paper seem to be trying to use it to replace things like the baseline compiler in the two-tier baseline/optimizing compiler strategy used for things like WebAssembly.

Because of this, is it really necessary to add tracing and try to use a two-tier interpreter/copy-and-patch JIT approach for this Python JIT? Wouldn't it make more sense to try to get it to be fast enough that the JIT can be used alone?



See my other comment for details, but in short, this strategy uses a single code base for both interpreter and JIT. So any further performance improvement will benefit both without any additional work. The traditional JIT-only approach is costly to maintain in comparison.


It doesn't cache ASTs or bytecode, or invoke clang/LLVM at runtime. It copies as bytes the assembly body of the compiled functions that act as stencils, using ELF relocations as locations of where to patch in values.
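
(A toy version of that byte-copying mechanism, for Linux x86-64 only. The stencil is `mov eax, imm32; ret` and the 4-byte immediate is the hole; the hole's offset is hardcoded here, whereas the real tooling reads it from the stencil object file's relocation records.)

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void) {
        /* Stencil: mov eax, imm32; ret. The 4-byte immediate starting
           at offset 1 is the "hole". Here its location is hardcoded; in
           the real scheme it comes from an ELF relocation record. */
        uint8_t stencil[] = { 0xb8, 0, 0, 0, 0, 0xc3 };

        uint8_t *code = mmap(NULL, 4096, PROT_READ | PROT_WRITE | PROT_EXEC,
                             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (code == MAP_FAILED) return 1;

        memcpy(code, stencil, sizeof stencil);    /* copy */
        int32_t value = 42;
        memcpy(code + 1, &value, sizeof value);   /* patch */

        int (*fn)(void) = (int (*)(void))code;
        printf("%d\n", fn());                     /* prints 42 */
        return 0;
    }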


My first impression is that this sounds clever but also quite fragile, because compilers love to change the minutiae of how they emit their relocations between versions or compilation options.


The stencil is compiled with a separate tool and checked into the repository, so if the stencil didn't change there is no additional compile issue at all. This also means that the aforementioned tool should be able to resolve all relocations beforehand, and the exact method should be standardized in the ELF spec, so the tool only has to track the ELF spec, not the compiler. (In reality the tool would also do some cleanup jobs that are compiler-dependent, of course.)


They emit the relocations into the relocation section(s), clearly marked as such. The only fragility may come from the copy-and-patch compiler not supporting all kinds of relocations that exist for a particular ABI, or from the ELF format itself changing, but those things don't happen very often.




Wow! Thank you for your hard work. I use Python for all experimental work, so this would speed up my scripting work, such as processing data from API calls or the filesystem. It would be good if it could speed up Flask or Django applications too.

I wrote a simple toy JIT for a Javascript-like language in jitcompiler.c. It might be useful for others to learn from (I'm a beginner too!) because it's so simply written and not complicated. It's about ~2400 lines of C: frontend and backend. I do lazy patching of call sites; I haven't gotten anywhere near as advanced as tracing or copy-and-patch. Much of the code I wrote for this JIT was first written in Python and then ported to C, such as the register allocation, graph colouring, precolouring, and "A-normal form" passes. The Java Virtual Machine also has a template interpreter, which is interesting to research.

I haven't gotten around to encoding amd64/x86_64 instructions as bitmasks yet, so I've hardcoded them, which is another ~2000 lines of code :-)

[1]: https://github.com/samsquire/compiler see jitcompiler.c



This is awesome.

I don't want to spoil anything, but please read the PR description. You won't regret it (or your money back!).



I read it first and also wanted to post a corresponding note here. You were faster. It's really worth it. :•)

You get downvoted on Hacker News if you don't use English?



I believe this is the talk about the python JIT by Brandt Bucher.

https://youtu.be/HxSHIpEQRjs



Their "binary stencils" remind me of Michael Franz's "Code Generation on the Fly: A Key to Portable Software". Franz generated the templates at runtime by caching parameterized code fragments produced while doing code gen on a version of the AST effectively encoded in an LZW-like way, so that each partial AST node had code generated only once. It didn't go as far as this, but the stencil/template approach was there.


Sweet! I tried playing around with implementing a copy-and-patch style JIT in Rust before, but unfortunately between the lack of `ghccc`-style register-heavy calling convention and (still!) having no way to guarantee tailcalls, rustc doesn't make it very easy and I gave up.


xu and kjolstad's paper https://dl.acm.org/doi/pdf/10.1145/3485513 (cc-by) looks pretty worth reading; the abstract makes exciting claims

i'm skeptical of this line

> The patching step rewrites pre-determined places in the binary code, which are operands of machine instructions, including jump addresses and values of constants (stack offsets and literal values). Despite patching binary code, however, the system does not need any knowledge of platform-specific machine instruction encoding and is thus portable.

like, i think if that were possible then we wouldn't need new linker relocation types for risc-v? how are you going to patch an auipc or st instruction to have the right stack offsets and memory addresses without knowing about the weird platform-specific details like how you have to increment the auipc immediate field to compensate for the sign-extension of the associated addi or jump field in the case where its high bit is set?
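
(The sign-extension compensation in question, concretely: because addi sign-extends its 12-bit immediate, the usual fix is to add 0x800 to the offset before taking the upper 20 bits for auipc. A small example:)

    #include <stdint.h>
    #include <stdio.h>

    /* Split a 32-bit pc-relative offset into auipc/addi immediates.
       addi sign-extends its 12-bit immediate, so when bit 11 of the
       offset is set the upper 20 bits must be incremented to
       compensate; adding 0x800 before shifting does exactly that. */
    static void split(int32_t offset, int32_t *hi20, int32_t *lo12) {
        *hi20 = (offset + 0x800) >> 12;
        *lo12 = offset - (*hi20 << 12);   /* always in [-2048, 2047] */
    }

    int main(void) {
        int32_t hi, lo;
        split(0x1800, &hi, &lo);          /* bit 11 set */
        /* hi = 2, lo = -2048: (2 << 12) + (-2048) == 0x1800 */
        printf("hi20=%d lo12=%d\n", hi, lo);
        return 0;
    }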

two of the most influential systems using this stencil technique, as i understand it, include bellard's qemu (02005) https://www.usenix.org/legacy/event/usenix05/tech/freenix/fu... (cited in xu and kjolstad's bibliography) and massalin's synthesis (01992) https://dl.acm.org/doi/10.5555/143219 (not cited)

synthesis's quaject object system was notable for generating code at object instantiation time, so that dynamic method dispatch was implemented by branching to a subroutine at a given offset from the receiver's address, instance variables could be located in immediate operands of instructions, and the program counter served as the receiver pointer (instance variable accesses could be pc-relative). unfortunately massalin never published synthesis itself, just papers about it



> like, i think if that were possible then we wouldn't need new linker relocation types for risc-v? how are you going to patch an auipc or st instruction to have the right stack offsets and memory addresses without knowing about the weird platform-specific details like how you have to increment the auipc immediate field to compensate for the sign-extension of the associated addi or jump field in the case where its high bit is set?

That knowledge is encoded into the relocation type (e.g. R_X86_64_64) for a given ABI. So the system does know about relocations, and some relocation types will be specific to a single architecture (R_RISCV_CALL_PLT in this example, I think?). But that's all you need to know about those architectures.



sure, you don't need to know the opcodes, just the encodings, but the claim in the paper seems to be stronger than that


Given that only a small subset of relocations will actually be used, I think the claim itself is correct but could have been explained better. For x86-64 it only has to recognize R_X86_64_64 (or equivalents in other formats), for example.
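
(In code, recognizing R_X86_64_64 amounts to very little. A minimal sketch of such a patcher, with the offset taken from a relocation record rather than from any knowledge of instruction encodings:)

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Applying R_X86_64_64 means "store this 64-bit value at that
       offset". No instruction decoding is involved; the offset comes
       from the relocation record the stencil compiler emitted. */
    static void apply_r_x86_64_64(uint8_t *code, size_t off, uint64_t value) {
        memcpy(code + off, &value, sizeof value);
    }

    int main(void) {
        uint8_t buf[16] = {0};
        apply_r_x86_64_64(buf, 4, 0xdeadbeefcafef00dULL);
        printf("%02x %02x\n", buf[4], buf[11]);  /* 0d de (little-endian) */
        return 0;
    }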


the claim is 'the system does not need any knowledge of platform-specific machine instruction encoding', but things like incrementing the upper 20 bits iff bit 11 is set seem uncontroversially 'platform-specific machine instruction encoding'

usually, and certainly to get the performance numbers they claim, things like scrambling the immediate bits and putting 20 bits of the immediate in one instruction and 12 bits in another are also considered platform-specific machine-instruction encoding, but you could imagine a system which supports sticking arbitrary bitfields in arbitrary places



> you could imagine a system which supports sticking arbitrary bitfields in arbitrary places

If there is such an architecture, yeah, the system may have to support them, or more accurately: relocations would be defined for them which have to be implemented by the system. But that's still a tiny portion of the actual instruction encoding. I believe most people, including the authors, would imagine something like an assembler or disassembler as the point of comparison, so pedantry aside, the claim is almost true.



no, i mean, you could support things like sparc and arm set-low/set-high instructions just by having a relocation type with arguments, say, 16, 31, and 9, meaning 'bits 16:31 of the constant should be stored in the bitfield starting at bit 9', and that almost works for risc-v auipc, lui, addi, and jr, too, even though the bit fields are split into tiny scrambled pieces in whatever position was most convenient for the chip they were working on at the time. but that means that your stencil linker has to interpret those 16, 31, and 9 numbers as it's generating the code, which is slow. but it doesn't help with the problem of risc-v's sign-extension of the 12-bit immediate in addi. also, now that i think about it, it isn't really sufficient for pc-relative addressing, because for that you need to propagate borrow bits through the scrambled bit fields
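
(A sketch of such an argument-carrying relocation, hypothetical and matching the 16/31/9 example above. As noted, it still can't express the sign-extension compensation, which needs arithmetic on the value rather than pure bit shuffling:)

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical data-driven relocation: "store bits lo..hi of the
       constant into the instruction word starting at bit dst". Note it
       still cannot express RISC-V's sign-extension compensation, which
       needs arithmetic on the value, not just bitfield shuffling. */
    static uint32_t patch_bits(uint32_t insn, uint32_t value,
                               int lo, int hi, int dst) {
        int width = hi - lo + 1;
        uint32_t mask = (width >= 32) ? 0xffffffffu : ((1u << width) - 1);
        uint32_t field = (value >> lo) & mask;
        insn &= ~(mask << dst);          /* clear the destination field */
        return insn | (field << dst);    /* splice in the selected bits */
    }

    int main(void) {
        /* Store bits 16:31 of 0xABCD1234 at bit 9 of a zeroed word. */
        printf("%08x\n", patch_bits(0, 0xABCD1234u, 16, 31, 9));
        /* prints 01579a00 */
        return 0;
    }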

i don't think the distinction is pedantic between a compiler that needs custom code for each new machine instruction encoding and one which can support a new architecture just by giving it a data file (for some reasonable set of new architectures, anyway). the paper claims that their system is the latter, and i don't believe it



For another very recent implementation of copy-and-patch, written in Zig, see Cyber-lang: https://cyberscript.dev/0.3/index.html


The PR is unreadable due to being written like a story, padded with irrelevant details.


It's all explained, including a 50-minute talk, in the linked issue: https://github.com/python/cpython/issues/113464


The commit messages are terrible. In my opinion, conventional commit messages [0] should be used for a clean commit history.

[0] https://www.conventionalcommits.org/en/v1.0.0/



Something more descriptive than "Grrr" would be nice I guess.


They are really bad and I would feel bad if I presented something like that to someone for a review (not just messages - most of those commits shouldn't exist at all). That said, the MR in question is marked as a draft, so anything goes at that point.

Also, Conventional Commits are mostly pointless. Linux-style commit message conventions are enough.



CPython seems to use squash merges, which means only one commit will end up on the main branch after merging this PR. The history on branches is irrelevant and can be completely messy, full of merges and other experiments; the main branch has one commit per actual feature/change.

And eh, conventional commits seem like pointless bureaucracy to me.



With only +1,722 lines added, even if the commits were eventually squashed upon landing, I'd consider it good etiquette to tidy up changes to maybe a handful of logical commits instead of pushing 404 raw commits.

Or maybe it's another weird pun on 404 Not Found? I can't tell by now...



The end result of doing this is good, but I find it really difficult to cleanly do this before I have something that's 100% complete.

I don't code linearly like "first I need feature A, then I code feature B which is needed for feature C, and so on"

It's usually a bit all over the place and it's not clear what depends on what until I start reaching the end.

So to do this properly I'd need to spend a day or two rewriting or making a new branch that cleanly adds everything in order. Hopefully in a way that doesn't leave master in a broken state when reverting tail commits.

In addition, when doing multiple pull requests for a single high level feature, you might get some comments about pull request "C" that would require changes in pull request "A"



How the hell is someone supposed to review your pull request if you don't take the time to clean it up?

I normally go through every single individual commit when reviewing something and find the commit messages extremely helpful to understand what some change is supposed to do.

Yes, cleaning up your commits takes some time, but I don't see an alternative if you don't work alone and want your code to stay maintainable.



I review the pull request as a whole, looking at the diff between main and the latest commit on the branch (i.e. what GitHub/etc show by default). Reading commit-by-commit means you’d read code that the author knows is wrong and has already fixed, and you’re cluttering your mind with it. During re-reviews, I usually look at the diff between the last commit I reviewed and the newest commit.


> Reading commit-by-commit means you’d read code that the author knows is wrong and had already fixed it

If the commit is wrong, it shouldn't be there. I expect every commit in a Pull Request to be functional on its own or I am not going to approve it in the first place. Git has tools to rewrite your commit history and you should use them.

The whole point is that I should be able to revert individual commits without code breaking. At least that is the ideal. A clean version history matters a lot to the people maintaining your code down the line.



Yes and no.

I think what you say is definitely the goal for day-to-day contributions.

However, there are changes to a code base that are more "Manhattan project" in nature where not all changes can be neatly packaged into their own commits, OR the PR author kind of needs to re-do their coding on a clean room branch. Which is significant overhead.

Being able to undo a commit is a means to an end, not the ultimate goal.



> I find it really difficult to cleanly do this before I have something that's 100% complete

That's what a DVCS like git makes easy to do, it's really worth learning.



Yes, but for such a significant contribution to a huge project it's good etiquette to squash on your own before submitting the PR. (Not that it means the PR shouldn't be reviewed and accepted.)

Honestly, I frequently do this for my own personal projects since I'm lazy, but if I'm submitting something to a big open source project I always clean it up first.



Being honest though, the guy's commit messages influence my prior on how reliable and well-designed his code will be.


“My prior”. Give it a rest.


I question the value of commit messages at all. Sure, at some level you need a summary of what a change is trying to do, but we have that at 5 levels now and they are completely redundant. Generally there is a ticket in some system for tracking changes, whether it's Jira or Github itself or some other system. Then you have a PR/MR that is attached to a branch which you are trying to have merged. Then there are the commit messages themselves. These are all completely redundant with each other, and nobody in their right mind should want all of them at the same time. It's too many places to look for the same exact information; there's no reason to maintain it in more than one place.

Some truly awful standard for formatting commit messages, how to do something that has at best dubious value to begin with, is a fantastic way to give the appearance of work without the need for skill or ability or spending time trying to get useful work done, a true boon to incompetents and hangers on. It's also a great way to snipe someone's amazing work and put yourself in a position to critique them with 1/1000th of the effort of accomplishing something useful.



I value the commit messages in the context where you develop a tool that has to be run widely.

In particular, I had experience with Wine. Having useful commit messages allows you to do bisects and trace down regressions with more ease than cross-checking messages with some external ticket system, and when you have a lot of people contributing to a project it's easier to see what they're doing when they try to do a patch.

I also believe, though, that it is good practice to help your colleagues when they need to track down an issue in a project that a lot of different people work on.



This would boil down to merge-level messages. Not all projects squash commits, so what you actually care about here are the MR/PR-level messages, which might be approximated by the commit message on the merge commit, but will actually probably just say "merge {branchname}". When you bisect following only first parents, you only see merge commits or commits fast-forwarded directly onto the main branch.

Putting what I'm saying another way: in a project with pull requests, commit messages are redundant with the text typed into the PR and the comments on same. We should just carbon-copy those onto the merge commit and forget per-commit messages.



I think Jira messages are generally written from a product perspective ("here's what we want") whereas PR's are written from an implementation perspective ("here's how we did it"). And then the PR description ought to just become the squashed commit message (at least that's how my current company does it).


Code tends to live longer than project management tools like JIRA. The version history should always be understandable on its own, without access to external software. Not to mention that JIRA tickets should be based on concrete user stories while commit messages describe implementation details; they are different layers of concern. As for individual commit messages, you need them so the poor soul reviewing your MR knows what the hell you are doing.

Seriously, please think of the poor soul having to maintain your legacy code when the JIRA is long gone, or the external contractor who doesn't even get access to it in the first place.



Then tooling should be set up to maintain the same information in multiple systems. I'm not saying that it's not important to remember what people were trying to do; I'm saying the original theory for commit messages is completely nonexistent in modern development. People don't make single self-contained commits directly to the main branch. Since everyone uses some kind of pull request as the unit of merge, just keep the information on the MR and copy it to the merge commit message.





