-
Many of these scores are quite opinionated, though, and are not universally agreed upon; neither are the priorities of each criterion. It seems like a very dangerous path for npm to "bless" existing scores, and thus implicitly bless their priorities and criteria.
-
One thing that's worth clarifying:
We actually don't use npms.io - at present, the scores in search results use a mechanism that was inspired by, but is not identical to, npms.io. The scores you see in npm's search results will differ from npms.io. We may want to move closer to npms.io's algorithm in the future, but I don't have any concrete plans to do that yet.
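Since the two mechanisms diverge, it's easy to check the gap for any given package yourself. Here is a minimal sketch that queries both public APIs: the npm registry's search endpoint (`registry.npmjs.org/-/v1/search`) and npms.io's package endpoint (`api.npms.io/v2/package/<name>`). Both endpoints are publicly documented, but treat the exact field layout as an assumption rather than a contract, and note that npms.io has not always been kept up to date. Assumes Node 18+ for global `fetch`.

```ts
// Sketch: compare the quality/popularity/maintenance scores that npm's
// registry search reports against the ones npms.io reports.

interface ScoreDetail {
  quality: number;
  popularity: number;
  maintenance: number;
}

async function npmRegistryScore(name: string): Promise<ScoreDetail> {
  const res = await fetch(
    `https://registry.npmjs.org/-/v1/search?text=${encodeURIComponent(name)}&size=1`
  );
  const body = await res.json();
  // For an exact package-name query, the first result should be the match.
  return body.objects[0].score.detail as ScoreDetail;
}

async function npmsIoScore(name: string): Promise<ScoreDetail> {
  const res = await fetch(
    `https://api.npms.io/v2/package/${encodeURIComponent(name)}`
  );
  const body = await res.json();
  return body.score.detail as ScoreDetail;
}

async function compare(name: string): Promise<void> {
  const [registry, npms] = await Promise.all([
    npmRegistryScore(name),
    npmsIoScore(name),
  ]);
  for (const key of ["quality", "popularity", "maintenance"] as const) {
    console.log(
      `${key}: registry=${registry[key].toFixed(2)} npms.io=${npms[key].toFixed(2)}`
    );
  }
}

compare("ns-flip").catch(console.error);
```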
-
@ethomson Thank you for your sincere response! :) If I understand the situation correctly:
I have to be careful what I say, because at the end of the day npm is a fine service that has had a profound impact on software development, and arguably humankind. 😃 All four issues above may soon be resolved. But I think we have a core problem here. Whatever scoring is being used, I think it is failing for ns-flip, and by extension for everyone. @bnb proposed using other rating systems in some way, given that their priorities have separate merits. I claim, more fundamentally, that your current scoring code simply has bugs, and that ns-flip reveals that.

Please, Edward, do me a kindness and look for yourself at the activity on GitHub for ns-flip. Most simply, just check the maintenance. You are giving me a maintenance score of 33, which is insane for such an active project. Nothing I do changes it; it has actually declined. I claim something is broken in your data collection or your analysis.

Then please look at the history of our quality score. I will be the first to say that ns-flip has its quality issues; our testing coverage is currently awful. But how do you explain that until last Thursday you ranked us at 92, and then in one day our quality nosedived to 62? Especially when you understand what changed in that release: we had implemented GitHub's suggested community practices! I can literally run searches with our own keywords and still not find us.

I realize that npm is free, and you have a right to say "So sue me!" But for all the great strides for Node here, npm is becoming a winner-takes-all platform. There are packages getting more than 50 million downloads per week, but surveying the rest of the 1.45 million packages suggests that most of npm is a graveyard for fizzled projects. A lot of them are junk, to be sure, but some are passion projects, and those probably fizzled because nobody knew about them. Sadly, all the documentation, tutorials, and maintenance in the world is wasted time if nobody sees a package.

If you want to empower promising new packages and tools to survive, the algorithm needs to be very fair and transparent for maintenance and quality. After all, those we can change. We certainly can't do anything about our "popularity" as newcomers.
-
I agree that the score system is confusing right now. I increased the quality score of my package (happy-dom) from about 50% to 93% at npms.io by updating dependencies, but at npmjs.com it has remained the same. Maintenance is 100% at npms.io, but only 33% at npmjs.com, even though I do new releases on a weekly basis. It would be great to have a page explaining how it works and how I can improve my package. However, I suspect that there is some bug in the search at npmjs.com.
-
This thread is years old and has seen no movement. The devs stated the goal is to move closer to npms.io, yet for one of my packages the scores differ by 2x between the two sites, and nothing I do makes a difference.
-
I'm mentioning a related discussion I created that explores a way to make npm search better. The main idea is to give fewer points for popularity and more points for consistency of commits, releases, and issue responses. The longer the period of consistency, the better. Discussion: https://github.com/npm/feedback/discussions/718
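To make the proposal concrete, here is a minimal sketch of one way a consistency signal could be computed from data the registry already exposes: the `time` field on `registry.npmjs.org/<name>`, which maps each published version to its publish date. The formula (penalizing high variance in inter-release gaps) is purely illustrative and not anything npm actually computes. Assumes Node 18+.

```ts
// Sketch: score a package by how regular its release cadence has been,
// using the registry's per-version publish timestamps.

async function releaseConsistency(name: string): Promise<number> {
  const res = await fetch(`https://registry.npmjs.org/${encodeURIComponent(name)}`);
  const body = await res.json();
  // `time` maps versions (plus "created"/"modified") to ISO dates.
  const dates = Object.entries(body.time as Record<string, string>)
    .filter(([key]) => key !== "created" && key !== "modified")
    .map(([, iso]) => Date.parse(iso))
    .sort((a, b) => a - b);
  if (dates.length < 3) return 0; // too few releases to measure consistency

  const gaps: number[] = [];
  for (let i = 1; i < dates.length; i++) gaps.push(dates[i] - dates[i - 1]);
  const mean = gaps.reduce((sum, g) => sum + g, 0) / gaps.length;
  const variance =
    gaps.reduce((sum, g) => sum + (g - mean) ** 2, 0) / gaps.length;
  const cv = Math.sqrt(variance) / mean; // lower = steadier cadence

  // Map steadiness to (0, 1]: perfectly regular releases score 1,
  // wildly irregular cadences decay toward 0.
  return 1 / (1 + cv);
}

releaseConsistency("happy-dom").then((score) =>
  console.log(`consistency score: ${score.toFixed(2)}`)
);
```

A real implementation would also need to weight the length of the consistent streak, as the proposal suggests; this sketch only captures regularity.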
-
Can confirm that npm's metrics plain suck. What npm does with its search metrics just sucks the fun out of creating and publishing packages. Why should I strive for quality when, it seems, popularity is the only thing that matters?
-
I do not understand the search metrics well, since I have not done a deep dive into them, so I won't comment on those. But here is a question I am raising about the basic download stats:
This seems to be a gross issue with the actual download stats. All other download statistics, and every stat derived from downloads, depend on this calculation. I do not understand why this was not noticed, and I wonder why.
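For anyone who wants to probe this kind of discrepancy themselves, here is a minimal sketch against the public downloads API at api.npmjs.org. Cross-checking the aggregate "point" figure against the sum of the daily "range" figures for the same period is one simple consistency test; it is an assumption on my part that this is the sort of mismatch being described, not a confirmed reproduction of the issue above.

```ts
// Sketch: sanity-check a package's download numbers by comparing the
// aggregate point total with the summed daily range for the same window.

async function checkDownloads(name: string): Promise<void> {
  const point = await (
    await fetch(`https://api.npmjs.org/downloads/point/last-week/${name}`)
  ).json();
  const range = await (
    await fetch(`https://api.npmjs.org/downloads/range/last-week/${name}`)
  ).json();

  const summed = range.downloads.reduce(
    (total: number, day: { downloads: number }) => total + day.downloads,
    0
  );

  console.log(`point total:  ${point.downloads}`);
  console.log(`summed daily: ${summed}`);
  if (point.downloads !== summed) {
    console.log("mismatch: aggregate and daily figures disagree");
  }
}

checkDownloads("ns-flip").catch(console.error);
```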
-
I too would ask that more clarity be provided on how to achieve 100% in the quality and maintenance metrics. As others have stated, these are things we should be able to control, and if they actually represent indicators of quality, shouldn't everyone be encouraged to adhere to them? I'm in a similar boat with my itty-router library. It's reasonably popular, and had > 99% on quality with npms.io (back when they were updating it). But on npmjs.com? 63% quality, and nothing I do seems to shake it. I have far more quality controls in place than many libraries, yet somehow a brand new, virtually blank repo can achieve a > 80% quality score.

[Screenshots: quality score on npms.io vs. quality score on npmjs.com]
-
Historically, we've seen a lot of "package score" implementations. Over the years there have been a bunch of attempts to quantify the quality of an npm package, and I can't imagine folks will ever stop building tools that do this, since the goalposts will always be moving.
Instead of building out a first-party competitor to these tools, I'd like to propose that npm aggregate their scores and surface them to end users. There's a variety of ways such a mechanism could be accomplished, so I don't want to be prescriptive, but the collective input would both spare score authors from rewriting the same basic metrics over and over again, and enable more focused and niche scoring mechanisms to arise.
I've put this in General since I'm not sure what the ideal interface for this would be - registry, website, cli, independent API, or all of the above. Open to any implementation that could be easily accessed by users.
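To give one concrete shape the proposal could take, here is a minimal sketch of the aggregation step: normalize each provider's score to [0, 1] in an adapter, then take a weighted average. The provider names, scores, and weights below are hypothetical placeholders (real adapters would call each provider's own API); only the shape of the aggregation is the point.

```ts
// Sketch: aggregate package scores from multiple third-party providers
// into one surfaced number, per the proposal above.

interface ProviderScore {
  provider: string;
  score: number;  // normalized to [0, 1] by the provider's adapter
  weight: number; // user- or npm-chosen emphasis per provider
}

function aggregate(scores: ProviderScore[]): number {
  const totalWeight = scores.reduce((sum, s) => sum + s.weight, 0);
  if (totalWeight === 0) return 0;
  return scores.reduce((sum, s) => sum + s.score * s.weight, 0) / totalWeight;
}

// Hypothetical example inputs, not real measurements.
const example: ProviderScore[] = [
  { provider: "npms.io", score: 0.93, weight: 1 },
  { provider: "provider-b", score: 0.78, weight: 1 },
  { provider: "provider-c", score: 0.85, weight: 0.5 },
];

console.log(`aggregate score: ${aggregate(example).toFixed(2)}`);
```

Letting users adjust the weights would also address the concern raised at the top of the thread: npm would surface the scores without blessing any one set of priorities.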