
DEV Community

Boris Jamot ✊

Posted on • Edited on • Originally published at mamyn0va.github.io

He Commits Vendor! 😱

Yesterday we had an interesting discussion among developers from different teams in my company. The subject was: « what is the point of vendoring? »

Vendoring means committing your project's dependencies along with your code.
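In a Composer-based project, for example, this boils down to something like the following sketch (most setups instead list `vendor/` in `.gitignore`):

```shell
# Resolve and download dependencies into vendor/.
composer install

# Commit the dependencies along with the code
# (this assumes vendor/ is NOT listed in .gitignore).
git add vendor/ composer.json composer.lock
git commit -m "Vendor dependencies"
```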

What happened is that some developers (including myself) discovered that other developers commit the vendor folder to the git repository. My first reaction was quite negative, as I thought it was a dirty practice from another time, when we had no dependency managers. The devs explained to us that it has many benefits:

  • first, it lets you build your app much faster in your CI
  • then, it ensures you have the exact versions of your dependencies
  • then, there is no way for one of them to be injected with malware
  • finally, you don't depend on the network (or on the remote dependency repositories) during the build

None of these arguments satisfied me. Not that they're untrue, but I think each of them can be solved in a cleaner way: for example, by using a cache, by using a custom repository with audited dependencies, and by fixing the network issues directly.
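For the cache alternative, here is a minimal CI sketch (assuming a Composer project; the cache path and how it is persisted between builds are CI-specific assumptions):

```shell
# Point Composer at a cache directory that the CI runner persists
# between builds (the persistence mechanism depends on your CI system).
export COMPOSER_CACHE_DIR=/ci-cache/composer

# Install from composer.lock; after the first build, downloads
# are served from the cache instead of the network.
composer install --prefer-dist --no-interaction
```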

And you, what do you think?

Top comments (14)

James Wright

On my personal projects I don't really bother with this.
However, you only have to look at the NPM "leftpad" debacle to see why I ALWAYS do this in professional projects for a company.

I've had multiple times in my career where I need to update an old project that hasn't been touched in years, that has a dependency that is no longer available (easily) online.

Weston Wedding

Yeah, at the very least, I feel like making sure you commit the dependencies for major versions of your final product is important. I don't feel like we should assume composer.json or package.json will even be enough 5 years from now. Online services come and go at a moment's notice.

Boris Jamot ✊

It seems to be a good reason.
Thx !

Yoginth

This will increase the size of the repo!

Darryl Norris

This could potentially become a very big problem depending on how your app is structured. For instance, if you have a monolithic app with decoupled components (modules or whatever you want to call them), and each of them has its own vendor directory, this will make your repo huge.

This causes problems like IDE indexing taking forever, and even just cloning the repo gets slow.

Yoginth

Yeah, exactly. It will take too much time to index in my IDE.

PNS11

It will have to be indexed and downloaded regardless of whether it's from the repo or some web storage.

Try ripgrep or fzf, they're pretty great at fast searching.

Adrian B.G.

Maybe this way devs will realize how much code they really have in their app.

Probably dead code, untested code, and so on. Package managers make it so easy to stop thinking about performance, lower-level concerns, and build sizes.

rhymes • Edited
  • first, it lets you build your app much faster in your CI

cache as you said

  • then, it ensures you have the exact versions of your dependencies

repeatable builds and exact versioning let you do that

  • then, there is no way for one of them to be injected with malware

what if it's already in there? It's not like you're going to audit the code of every single dependency (and their dependencies) you add, but you can still use the cache for that

  • finally, you don't depend on the network (or on the remote dependency repositories) during the build

proxy or cache as you said

None of these arguments satisfied me. Not that they're untrue, but I think each of them can be solved in a cleaner way: for example, by using a cache, by using a custom repository with audited dependencies, and by fixing the network issues directly.

Yep :-)

It's not a bad thing to do, it's just not really needed, and you end up putting your dependencies (and their dependencies) as a diff in the git log every time you upgrade anything

Adrian B.G.

It solves some rare problems, but real ones nevertheless. I would use it if team members have low bandwidth, or for projects that are important but not in active development.

I imagine, if the availability of the source is a problem, it could be saved in a git submodule or something.

Sanzeeb Aryal

I think I'll need to commit the vendor directory. The vendor directory is required when deploying, and I use GH Actions to deploy. The vendor directory isn't on GitHub, because it's gitignored. So I apparently need to commit the vendor directory? Do you have any suggestions on that? For context, here's the issue: github.com/sanzeeb3/wp-frontend-de...

I have never committed vendor yet, but I'll need to?
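If it helps, one alternative to committing vendor/ is to build it inside the GitHub Actions job itself, as a deploy step (a hedged sketch; the flags are typical for production installs, not taken from that repo):

```shell
# Deploy step, after checkout: build vendor/ at deploy time
# instead of committing it.
composer install --no-dev --prefer-dist --optimize-autoloader
# ...then package or upload the result, vendor/ included.
```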

Franco Traversaro • Edited

We just commit the composer.lock, or the equivalent for npm projects, so anyone can install exactly the right versions of the dependencies. Problem solved.
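The reason the lock file is enough is that it pins content hashes, not just version numbers. Conceptually (a self-contained sketch with a stand-in file, not real Composer/npm internals):

```shell
# Stand-in for a downloaded package archive (hypothetical content).
printf 'fake package contents\n' > dep-1.0.0.tgz

# At lock time, record the archive's hash -- roughly what
# composer.lock / package-lock.json store per dependency.
locked_hash=$(sha256sum dep-1.0.0.tgz | cut -d' ' -f1)

# At install time, verify the downloaded archive against the lock.
current_hash=$(sha256sum dep-1.0.0.tgz | cut -d' ' -f1)
if [ "$current_hash" = "$locked_hash" ]; then
    echo "hash ok"
else
    echo "hash mismatch" >&2
    exit 1
fi
```

If the registry ever serves different bytes under the same version, the install fails instead of silently picking up the change.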

 
Boris Jamot ✊

This has absolutely no impact on IDE indexing. In both cases the same files are indexed. No overhead.

โ˜ Green Panda

Try to build and deploy in an air-gapped environment. Oops...