

Git Merge 2015 --continue

Git Merge is a wrap! At least that's what the second-to-last slide said, followed by directions to the Git birthday party location. “Birthday party?” you ask? Yes - because the Git project is already 10 years old, folks! And I was right there in Paris to celebrate.


Despite all the jokes about how Git still isn’t eligible to drink beer - oops, wine (sorry, France) - it turned out to be one hell of a party. I'm not much of a wine expert myself, but there are two things which I love about Paris: a) cheese and b) the fact that it is right here in lovely Europe.


The only thing I didn't like about la Gaîté lyrique, where the 2015 conference was held.

Elephant in the repo

So-called large file storage received the most attention at this year's conference, at least from my perspective. GitHub had announced git-lfs mere hours before the event - so fresh that Rick Olson, standing onstage, was asking his colleagues whether it was OK to talk to us about it yet. Maybe they would have kept this sauce secret if not for our conference.


Slide from GitHub

The basic idea is to give the user a way of deciding which files should be considered large - for example, .mp3 files exceeding a limit of 100 MB - and keep them on a separate file server instead of in the repo, where we would now store only small pointer files containing meta information about the remote giants: things like the content's SHA-256 hash, its size, and everything else which could be considered light. Pretty neat, if you ask me.
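To make that concrete: in a real setup you'd just run `git lfs track "*.mp3"` and let the tool do the work, but the pointer file it checks into the repo in place of the big file is simple enough to build by hand for illustration:

```shell
# Create a small file to stand in for a "large" .mp3 (toy example).
printf 'pretend this is 100MB of audio' > track.mp3

# git-lfs would replace track.mp3 in the repo with a tiny pointer file
# like the one below; here we construct it manually just to show the format.
oid=$(sha256sum track.mp3 | cut -d' ' -f1)
size=$(wc -c < track.mp3)

cat > track.mp3.pointer <<EOF
version https://git-lfs.github.com/spec/v1
oid sha256:$oid
size $size
EOF

cat track.mp3.pointer
```

The repo only ever sees the three-line pointer; the actual bytes live on the separate LFS server, addressed by that SHA-256 hash.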


Slide from Atlassian (who were also giving away badass-looking sunglasses, thanks!)

John Garcia from Atlassian also gave a talk focused on big, monolithic repositories. Current ideas include Local Object Retention (the slide in the photo above is pretty self-explanatory) and storing large files in a way similar to that described above. I'm eager to see how those ideas are implemented in the Git project itself! Here's the whole presentation by John:

By the way, it is really amazing to see how Atlassian and GitHub - arguably obvious competitors - can work together to bring better solutions to the entire open source community. Whichever hosted-VCS solution you prefer, there's one thing you have to admit: both companies have class.

File locking

It's what naturally follows from the “big files” issue: you don't want to edit a big PSD file, push it, and find out that your colleague committed their version first - now you've wasted your time and bandwidth on achieving basically nothing. This issue also has to be addressed with proper care. Here we go...

It is generally agreed that file locking works in the following way:

  • you lock a file and it is marked as read-only on other machines - the implementation may of course differ between certain OSs and/or filesystems

  • locks are synced

  • if your friend needs to access a file locked by you, they can - upon receiving the appropriate notification - get in touch with you and discuss, like humans, what you are both going to do about it, or… break the lock.


No worries, locked files aren’t locked forever.

Breaking the lock is also possible: in very specific scenarios, you can unlock the file using the --force option or similar, which can be very useful when someone locks a file and then leaves for vacation without releasing it. Like almost everywhere else in the Unix world, people are free to do dangerous things as long as they know how dangerous they are.

UNIX was not designed to stop its users from doing stupid things, as that would also stop them from doing clever things. -- Doug Gwyn

Now, keep in mind that this represents my relatively limited understanding of what the guys and gals there were talking about, so take it with a grain of salt.
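To illustrate the semantics rather than any real command set (these are not Git commands - just a toy sketch of the lock/read-only/force-break flow described above; git-lfs did later grow `git lfs lock` and `git lfs unlock --force` along similar lines):

```shell
# Toy sketch of the locking flow: a lock records its holder in a
# sidecar file (which a real system would sync to the server) and
# marks the target read-only; --force breaks someone else's lock.

lock() {
  # Refuse to take a lock that is already held.
  if [ -f "$1.lock" ]; then
    echo "already locked by $(cat "$1.lock")" >&2
    return 1
  fi
  whoami > "$1.lock"   # record who holds the lock
  chmod a-w "$1"       # mark the file read-only
}

unlock() {
  # Only the holder may unlock - unless --force is given.
  if [ "$2" != "--force" ] && [ "$(cat "$1.lock")" != "$(whoami)" ]; then
    echo "locked by $(cat "$1.lock"); use --force to break" >&2
    return 1
  fi
  rm -f "$1.lock"
  chmod u+w "$1"       # writable again
}

touch design.psd
lock design.psd      # design.psd is now read-only, lock recorded
unlock design.psd    # lock released normally
```

The real implementations differ in how locks are synced and surfaced, but the shape - hold, notify, negotiate or force-break - is the same.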

Hey, we do that!

Amazon - or the personification of Amazon in the form of Danilo Poccia - gave a talk about managing your infrastructure with Git. Maybe we don't do it in the exact same way - we use fewer Amazon servers than Amazon, for sure - but we do use Git to handle things like user credentials, access to repos, and server management in the form of Chef recipes, so I was sitting there and nodding in comprehension.

For example: here's our open source Access app, which we use to manage users and permissions on GitHub and Google Groups. It's pretty simple but gets the job done for us.

Leaflet propaganda

One extra thing I encountered - and loved - was a leaflet on An Automated Git Branching Flow. It was given out by some guy whose name I can't recall (sorry, guy! You have my thanks for this!), and it shows how the well-known Git workflow by Vincent Driessen can be performed with heavy use of continuous integration (namely, Jenkins CI). If you already use the basic flow, I encourage you to become familiar with the extended one.


That’s how we roll, err... deploy

The only constant is change

Actually, that's the mantra our CEOs tell us over and over again - so much so that we even started to cheerfully mock them, which, of course, backfired on us in the form of company t-shirts with the very same sentence printed all over them… but enough about Netguru.

I used this saying here because it very nicely describes how things are in the Git community. Right now, there might not be too many new and shiny features on the way, but that's because developers are focusing on building a quality release - fixing existing bugs and improving speed and ease of use. But don't worry, folks! Big things are just around the corner, like native support for larger files.

I heard some developers saying, ”well, you just shouldn't store big files in a Git repo” - but that's not how the Git community sees it. They recognise that people use their tool for certain tasks, and they want to allow people to do more of them. Git isn't a physical tool but a piece of software whose source can be shaped in as many directions as needed (think of the Linux kernel, for example), and they don't want to bind people to some specific set of rules.

They would rather sit and work on features which allow the tool - Git - to be used in all the ways imagined by other developers. That's where its beauty lies.

And here comes the ending. Those were the presentations which I liked the most, but I sincerely encourage you to pay a visit next year and check out the rest of them for yourself: some other talks included one on how to teach people Git, another on how Twitter scales Git, and even one about how the team of developers at Microsoft managed to fight their way into integrating Git in the Team Foundation products. I’m sure you would find something which interests you!

To sum up, everything went great, without any conflicts whatsoever, and we really got merged into the atmosphere of Paris (pun intended). Big thanks to everyone who was there and I hope to see you on the next release of the Git Merge conference!

One last thing: if you are interested in what happened behind the closed doors of the contributors summit, you can read some notes here.

Also, I got the official t-shirt, and therefore I am now clearly an expert, so just you wait for more awesome git-related posts from me.

Are you interested in other events for devs happening this year? Check out our lists of top 9 mobile events in Europe or top 15 Ruby and Rails global events.
