Hi,
I’ve just uploaded a new build of xmonad to make sure the build-dependencies match the ones in the archive.
I’m wondering if this strictness (setting the build-dependencies on haskell modules exactly to the version currently installed) is really needed. The policy only mentions it is used to keep the architectures in sync, but I don’t really understand the problem this is fixing. I’m also wondering if we can’t keep the archives in sync using binNMUs.
I would think it saves us some work this way. It’s also nicer to our users who try to re-build a certain package on their local machine, without having the very exact build-dependencies installed.
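For illustration (package names and versions made up), the generated debian/control of such a package currently pins exact versions:

    Build-Depends: debhelper (>= 5), ghc6 (= 6.8.2-1), libghc6-mtl-dev (= 1.1.0.0-1)

while someone rebuilding on their own machine would be fine with something like:

    Build-Depends: debhelper (>= 5), ghc6 (>= 6.8.2), libghc6-mtl-dev (>= 1.1.0.0)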
I’m probably missing some point here, but I’d like to learn about it.
Thanks, Joachim
Hi Joachim,
On Thu, Mar 06, 2008 at 08:26:16PM +0100, Joachim Breitner wrote:
I’m wondering if this strictness (setting the build-dependencies on haskell modules exactly to the version currently installed) is really needed. The policy only mentions it is used to keep the architectures in sync, but I don’t really understand the problem this is fixing.
(on the assumption that you agree that the binary packages should have strict deps:)
The problem is:
* X depends on Y.
* You upload new versions of X and Y together.
* X gets built against the old version of Y on, say, hppa.
* Y gets built on hppa.
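To make that concrete (made-up package names and versions), the hppa binary of X now carries a strict dependency that the archive can no longer satisfy:

    Package: libghc6-x-dev
    Depends: libghc6-y-dev (= 1.0-1)    [built against the old Y]

while hppa's archive only has libghc6-y-dev 1.1-1, so X is uninstallable there.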
Now we have to get a new hppa X into the archive. The ways we have to do this are:
* Pester hppa people or buildd people to get a binNMU done; highly inefficient for this to be standard practice.
* Do an hppa build ourselves on the developer machine; again highly inefficient, and also there are times when there is no machine available.
* Do another source upload of X.
* Don't upload X and Y together, but stagger them so that we don't upload X until Y has built everywhere. This isn't practical when it's not at all uncommon for Y to take weeks to be built everywhere.
I’m also wondering if we can’t keep the archives in sync using binNMUs.
There's no technical reason why we can't, it's just a lot more effort.
I would think it saves us some work this way.
Why's that?
It’s also nicer to our users who try to re-build a certain package on their local machine, without having the very exact build-dependencies installed.
That is true. I suppose we could use >= current rather than == current for the build-deps.
This would be essentially the same from the point of view of the archive, though, in that the buildds will still need to build things in the same order.
We'd need to generate the strict binary-deps on the Cabal package at build time rather than in "debian/rules update-generated-files".
The alternative would be to leave things as they are, and for users to run "debian/rules update-generated-files" before trying to locally build the package. They probably ought to do this anyway, especially if they're building for a different release than the package was designed for.
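As a sketch of how that build-time generation could work (variable and package names made up), we could fill a substvars file from the installed versions, much as dpkg-shlibdeps does for C libraries, and let dh_gencontrol substitute it:

    # in debian/rules, at build time:
    echo "haskell:Depends=libghc6-mtl-dev (= $$(dpkg-query -W -f='$${Version}' libghc6-mtl-dev))" \
        >> debian/libghc6-xmonad-dev.substvars

and then in debian/control:

    Depends: ${haskell:Depends}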
Thanks Ian
Hi,
On Sunday, 09.03.2008 at 17:52 +0000, Ian Lynagh wrote:
Hi Joachim,
On Thu, Mar 06, 2008 at 08:26:16PM +0100, Joachim Breitner wrote:
I’m wondering if this strictness (setting the build-dependencies on haskell modules exactly to the version currently installed) is really needed. The policy only mentions it is used to keep the architectures in sync, but I don’t really understand the problem this is fixing.
(on the assumption that you agree that the binary packages should have strict deps:)
Right, I agree with that.
- Pester hppa people or buildd people to get a binNMU done; highly inefficient for this to be standard practice.
I wonder if the right fix would be to make this more efficient. AFAIK, it’s just a command they run or a flag they set to make the buildd rebuild it. We could even have a program that semi-automatically tells us which packages need to be binNMUed.
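As a rough sketch (made-up package name and version), such a program could just grep the Packages files for exact-versioned dependencies that the archive can no longer satisfy:

    # list packages still depending on the old libghc6-mtl-dev on hppa:
    grep-dctrl -F Depends 'libghc6-mtl-dev (= 1.0-1)' -s Package \
        /var/lib/apt/lists/*_binary-hppa_Packages

Anything it prints would be a candidate for a binNMU.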
I’ll ask the buildd people if this would be acceptable, or if it can be made acceptable (by more automation).
Maybe the release team would favour this over tight build-dependencies, because it might make things easier for them or for security updates, or something like that.
- Do an hppa build ourselves on the developer machine; again highly inefficient, and also there are times when there is no machine available.
- Do another source upload of X.
- Don't upload X and Y together, but stagger them so that we don't upload X until Y has built everywhere. This isn't practical when it's not at all uncommon for Y to take weeks to be built everywhere.
I agree that these three are worse than what we have now.
I would think it saves us some work this way.
Why's that?
I would like to fix a bug in xmonad, but when I upload a new xmonad, I also have to manually upload a new xmonad-contrib package (involving changelog bumps, re-generating control, building, uploading – all routine, but still trouble and a time waster). If we could (cheaply, or even automatically) do binNMUs, this wouldn’t be such a big deal.
It’s also nicer to our users who try to re-build a certain package on their local machine, without having the very exact build-dependencies installed.
That is true. I suppose we could use >= current rather than == current for the build-deps.
This would be essentially the same from the point of view of the archive, though, in that the buildds will still need to build things in the same order.
Right, might be a good idea if we don’t go the binNMU route.
We'd need to generate the strict binary-deps on the Cabal package at build time rather than in "debian/rules update-generated-files".
Yes, but that’d be possible? After all, it’s similar to what dpkg-shlibdeps does, right?
Thanks, Joachim
On Sun, Mar 09, 2008 at 06:42:55PM +0000, Joachim Breitner wrote:
I would think it saves us some work this way.
Why's that?
I would like to fix a bug in xmonad, but when I upload a new xmonad, I also have to manually upload a new xmonad-contrib package (involving changelog bumps, re-generating control, building, uploading – all routine, but still trouble and a time waster). If we could (cheaply, or even automatically) do binNMUs, this wouldn’t be such a big deal.
Right, so which is better depends on whether it is easier to do 1 source upload, or to trigger a binNMU on every arch.
In some cases you'll want to rebuild some or all of the dependent packages for testing, of course.
We'd need to generate the strict binary-deps on the Cabal package at build time rather than in "debian/rules update-generated-files".
Yes, but that’d be possible? After all, it’s similar to what dpkg-shlibdeps does, right?
Yes, it's certainly possible.
Thanks Ian