[PATCH] rootfs: Make rootfs_postprocess_finalize the last step


Vijai Kumar K

Feb 6, 2020, 9:07:44 AM
to isar-...@googlegroups.com, Vijai Kumar K
Sometimes the additional postprocessing functions we add as
part of our custom image need a proper chroot environment.

Implicitly make rootfs_postprocess_finalize the last step
executed in the rootfs_postprocess task.

Signed-off-by: Vijai Kumar K <Vijaikumar_...@mentor.com>
---
meta/classes/rootfs.bbclass | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/meta/classes/rootfs.bbclass b/meta/classes/rootfs.bbclass
index 64eaef7..b0394d5 100644
--- a/meta/classes/rootfs.bbclass
+++ b/meta/classes/rootfs.bbclass
@@ -197,7 +197,7 @@ rootfs_generate_manifest () {
${ROOTFS_MANIFEST_DEPLOY_DIR}/"${PF}".manifest
}

-ROOTFS_POSTPROCESS_COMMAND += "${@bb.utils.contains('ROOTFS_FEATURES', 'finalize-rootfs', 'rootfs_postprocess_finalize', '', d)}"
+ROOTFS_POSTPROCESS_COMMAND_append = "${@bb.utils.contains('ROOTFS_FEATURES', 'finalize-rootfs', ' rootfs_postprocess_finalize', '', d)}"
rootfs_postprocess_finalize() {
sudo -s <<'EOSUDO'
test -e "${ROOTFSDIR}/chroot-setup.sh" && \
--
2.17.1
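
For context, the kind of downstream hook the commit message alludes to would look roughly like this (a minimal sketch; my_postprocess_hook and its body are invented for illustration, not taken from an existing layer):

ROOTFS_POSTPROCESS_COMMAND += "my_postprocess_hook"
my_postprocess_hook() {
    # Needs a working chroot, i.e. /dev, /proc, /sys and the apt mounts
    # must still be in place when this runs.
    sudo -E chroot "${ROOTFSDIR}" /usr/bin/apt-get update
}

Such a hook only works if rootfs_postprocess_finalize, which tears those mounts down, runs after it.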

Jan Kiszka

Feb 6, 2020, 12:21:22 PM
to Vijai Kumar K, isar-...@googlegroups.com, Vijai Kumar K
On 06.02.20 15:06, Vijai Kumar K wrote:
> Sometimes the additional postprocessing functions we add as
> part our custom image needs a proper chroot environment.

When exactly?

>
> Implicitly make rootfs_postprocess_finalize as the last step
> to be executed in rootfs_postprocess task.
>

Well, that relies on no one else using _append to add things. Otherwise,
the race is open again...
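
To spell the ordering concern out (layer and function names here are hypothetical): the class now does

ROOTFS_POSTPROCESS_COMMAND_append = " rootfs_postprocess_finalize"

but a downstream recipe that writes

ROOTFS_POSTPROCESS_COMMAND_append = " my_downstream_hook"

after its inherit line gets parsed later and therefore ends up behind the finalize step again, since _append operations are applied in parse order.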

Jan

> Signed-off-by: Vijai Kumar K <Vijaikumar_...@mentor.com>
> ---
> meta/classes/rootfs.bbclass | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/meta/classes/rootfs.bbclass b/meta/classes/rootfs.bbclass
> index 64eaef7..b0394d5 100644
> --- a/meta/classes/rootfs.bbclass
> +++ b/meta/classes/rootfs.bbclass
> @@ -197,7 +197,7 @@ rootfs_generate_manifest () {
> ${ROOTFS_MANIFEST_DEPLOY_DIR}/"${PF}".manifest
> }
>
> -ROOTFS_POSTPROCESS_COMMAND += "${@bb.utils.contains('ROOTFS_FEATURES', 'finalize-rootfs', 'rootfs_postprocess_finalize', '', d)}"
> +ROOTFS_POSTPROCESS_COMMAND_append = "${@bb.utils.contains('ROOTFS_FEATURES', 'finalize-rootfs', ' rootfs_postprocess_finalize', '', d)}"
> rootfs_postprocess_finalize() {
> sudo -s <<'EOSUDO'
> test -e "${ROOTFSDIR}/chroot-setup.sh" && \
>

--
Siemens AG, Corporate Technology, CT RDA IOT SES-DE
Corporate Competence Center Embedded Linux

vijai kumar

Feb 6, 2020, 12:47:46 PM
to isar-users


On Thursday, February 6, 2020 at 10:51:22 PM UTC+5:30, Jan Kiszka wrote:
On 06.02.20 15:06, Vijai Kumar K wrote:
> Sometimes the additional postprocessing functions we add as
> part our custom image needs a proper chroot environment.

When exactly?


Though not finalized yet, one example is the base-apt source gathering which I proposed to do via a rootfs postprocess function.
That is the only one right now, but I believe more similar cases might come. We already
have a post-process step in our QA layer to pull out the dpkg status file for processing.
But that one doesn't need a chroot.
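
For instance, the status-file export mentioned above could be as small as this (a sketch; the function name and deploy path are made up):

ROOTFS_POSTPROCESS_COMMAND += "export_dpkg_status"
export_dpkg_status() {
    # plain file copy, no chroot into ${ROOTFSDIR} required
    cp "${ROOTFSDIR}/var/lib/dpkg/status" "${DEPLOY_DIR_IMAGE}/${PF}.dpkg_status"
}

whereas the base-apt source gathering has to chroot into ${ROOTFSDIR} and therefore depends on the mounts that finalize tears down.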

 

>
> Implicitly make rootfs_postprocess_finalize as the last step
> to be executed in rootfs_postprocess task.
>

Well, that relies on no one else using _append to add things. Otherwise,
the race is open again...


Yes. Also to note, there was this proposal from Baurzhan[1] to remove finalize from the rootfs features.
We could do something similar if no one actually uses that feature explicitly. But, though not tested,
I believe that might break buildchroot, and we might need to take care of it in buildchroot's post-processing.
If everyone agrees then we could take that path. That should be cleaner and should avoid these kinds of
easy-to-make errors.

[1] https://groups.google.com/d/msg/isar-users/_RLBzyvvZvM/WuYpLPVBAQAJ


Thanks,
Vijai Kumar K

Jan Kiszka

Feb 6, 2020, 1:09:20 PM
to vijai kumar, isar-users
On 06.02.20 18:47, vijai kumar wrote:
>
>
> On Thursday, February 6, 2020 at 10:51:22 PM UTC+5:30, Jan Kiszka wrote:
>
> On 06.02.20 15:06, Vijai Kumar K wrote:
> > Sometimes the additional postprocessing functions we add as
> > part our custom image needs a proper chroot environment.
>
> When exactly?
>
>
>
> Though not finalized, the base-apt source gathering which I proposed to
> do via rootfs postprocess.
> That is the only one right now. But I believe more similar might come.
> We already
> have some post-process in our QA layer to pull out the dpkg status file
> for processing.
> But that doesn't need chroot.
>

Absolutely fine, just make sure to describe use cases when arguing about
the "why" of a commit (which is what the commit log is about).

>  
>
>
> >
> > Implicitly make rootfs_postprocess_finalize as the last step
> > to be executed in rootfs_postprocess task.
> >
>
> Well, that relies on no one else using _append to add things.
> Otherwise,
> the race is open again...
>
>
> Yes. Also to note, there was this proposal from Baurzhan[1] to remove
> finalize from rootfs features.
> We could do something similar if no one actually uses that feature
> explicitly. But, though not tested,
> I believe that might break buildchroot, and we might need to take care
> of it in buildchroot's post-process.
> If everyone agrees then we could take that path. That should be cleaner
> and should avoid these kinds of
> easy to make errs. 
>
> [1] https://groups.google.com/d/msg/isar-users/_RLBzyvvZvM/WuYpLPVBAQAJ
>

Second voice. Seems like we should do it then, model the finalization
without ROOTFS_POSTPROCESS_COMMAND.

Jan

vijai kumar

Feb 6, 2020, 1:28:36 PM
to isar-users


On Thursday, February 6, 2020 at 11:39:20 PM UTC+5:30, Jan Kiszka wrote:
On 06.02.20 18:47, vijai kumar wrote:
>
>
> On Thursday, February 6, 2020 at 10:51:22 PM UTC+5:30, Jan Kiszka wrote:
>
>     On 06.02.20 15:06, Vijai Kumar K wrote:
>     > Sometimes the additional postprocessing functions we add as
>     > part our custom image needs a proper chroot environment.
>
>     When exactly?
>
>
>
> Though not finalized, the base-apt source gathering which I proposed to
> do via rootfs postprocess.
> That is the only one right now. But I believe more similar might come.
> We already
> have some post-process in our QA layer to pull out the dpkg status file
> for processing.
> But that doesn't need chroot.
>

Absolutely fine, just make sure to describe use cases when arguing about
the "why" of a commit (which is what the commit log is about).

Sure. I agree that the 'Sometimes' is pretty vague. Sorry, will take care of that.

>  
>
>
>     >
>     > Implicitly make rootfs_postprocess_finalize as the last step
>     > to be executed in rootfs_postprocess task.
>     >
>
>     Well, that relies on no one else using _append to add things.
>     Otherwise,
>     the race is open again...
>
>
> Yes. Also to note, there was this proposal from Baurzhan[1] to remove
> finalize from rootfs features.
> We could do something similar if no one actually uses that feature
> explicitly. But, though not tested,
> I believe that might break buildchroot, and we might need to take care
> of it in buildchroot's post-process.
> If everyone agrees then we could take that path. That should be cleaner
> and should avoid these kinds of
> easy to make errs. 
>
> [1] https://groups.google.com/d/msg/isar-users/_RLBzyvvZvM/WuYpLPVBAQAJ
>

Second voice. Seems like we should do it then, model the finalization
without ROOTFS_POSTPROCESS_COMMAND.


Sure. I will start working on that.

Thanks,
Vijai Kumar K

Vijai Kumar K

Feb 10, 2020, 12:38:02 AM
to isar-...@googlegroups.com, Vijai Kumar K
With the current implementation it is difficult to append a
postprocess function which requires a chroot environment.
For example, to add a postprocess function which runs apt-get to
download the sources of all packages installed in the target.

rootfs_postprocess_finalize is not actually an optional feature
but rather a necessary cleanup function for the image class.
So, move the implementation to the image class and make it a task.

Signed-off-by: Vijai Kumar K <Vijaikumar_...@mentor.com>
---

Changes in v2:
- The solution is changed to remove rootfs_finalize from
ROOTFS_POSTPROCESS_COMMAND.

meta/classes/image.bbclass | 41 ++++++++++++++++++++++++++++++++++++-
meta/classes/rootfs.bbclass | 39 -----------------------------------
2 files changed, 40 insertions(+), 40 deletions(-)

diff --git a/meta/classes/image.bbclass b/meta/classes/image.bbclass
index 725bc04..98338ac 100644
--- a/meta/classes/image.bbclass
+++ b/meta/classes/image.bbclass
@@ -60,7 +60,7 @@ image_do_mounts() {
}

ROOTFSDIR = "${IMAGE_ROOTFS}"
-ROOTFS_FEATURES += "copy-package-cache clean-package-cache finalize-rootfs generate-manifest"
+ROOTFS_FEATURES += "copy-package-cache clean-package-cache generate-manifest"
ROOTFS_PACKAGES += "${IMAGE_PREINSTALL} ${IMAGE_INSTALL}"
ROOTFS_MANIFEST_DEPLOY_DIR ?= "${DEPLOY_DIR_IMAGE}"

@@ -169,5 +169,44 @@ python do_deploy() {
}
addtask deploy before do_build after do_image

+do_rootfs_finalize() {
+ sudo -s <<'EOSUDO'
+ test -e "${ROOTFSDIR}/chroot-setup.sh" && \
+ "${ROOTFSDIR}/chroot-setup.sh" "cleanup" "${ROOTFSDIR}"
+ rm -f "${ROOTFSDIR}/chroot-setup.sh"
+
+ test ! -e "${ROOTFSDIR}/usr/share/doc/qemu-user-static" && \
+ find "${ROOTFSDIR}/usr/bin" \
+ -maxdepth 1 -name 'qemu-*-static' -type f -delete
+
+ mountpoint -q '${ROOTFSDIR}/isar-apt' && \
+ umount -l ${ROOTFSDIR}/isar-apt
+ rmdir --ignore-fail-on-non-empty ${ROOTFSDIR}/isar-apt
+
+ mountpoint -q '${ROOTFSDIR}/base-apt' && \
+ umount -l ${ROOTFSDIR}/base-apt
+ rmdir --ignore-fail-on-non-empty ${ROOTFSDIR}/base-apt
+
+ mountpoint -q '${ROOTFSDIR}/dev' && \
+ umount -l ${ROOTFSDIR}/dev
+ mountpoint -q '${ROOTFSDIR}/sys' && \
+ umount -l ${ROOTFSDIR}/proc
+ mountpoint -q '${ROOTFSDIR}/sys' && \
+ umount -l ${ROOTFSDIR}/sys
+
+ rm -f "${ROOTFSDIR}/etc/apt/apt.conf.d/55isar-fallback.conf"
+
+ rm -f "${ROOTFSDIR}/etc/apt/sources.list.d/isar-apt.list"
+ rm -f "${ROOTFSDIR}/etc/apt/preferences.d/isar-apt"
+ rm -f "${ROOTFSDIR}/etc/apt/sources.list.d/base-apt.list"
+
+ mv "${ROOTFSDIR}/etc/apt/sources-list" \
+ "${ROOTFSDIR}/etc/apt/sources.list.d/bootstrap.list"
+
+ rm -f "${ROOTFSDIR}/etc/apt/sources-list"
+EOSUDO
+}
+addtask rootfs_finalize before do_rootfs after do_rootfs_postprocess
+
# Last so that the image type can overwrite tasks if needed
inherit ${IMAGE_TYPE}
diff --git a/meta/classes/rootfs.bbclass b/meta/classes/rootfs.bbclass
index 64eaef7..153038a 100644
--- a/meta/classes/rootfs.bbclass
+++ b/meta/classes/rootfs.bbclass
@@ -197,45 +197,6 @@ rootfs_generate_manifest () {
${ROOTFS_MANIFEST_DEPLOY_DIR}/"${PF}".manifest
}

-ROOTFS_POSTPROCESS_COMMAND += "${@bb.utils.contains('ROOTFS_FEATURES', 'finalize-rootfs', 'rootfs_postprocess_finalize', '', d)}"
-rootfs_postprocess_finalize() {
- sudo -s <<'EOSUDO'
- test -e "${ROOTFSDIR}/chroot-setup.sh" && \
- "${ROOTFSDIR}/chroot-setup.sh" "cleanup" "${ROOTFSDIR}"
- rm -f "${ROOTFSDIR}/chroot-setup.sh"
-
- test ! -e "${ROOTFSDIR}/usr/share/doc/qemu-user-static" && \
- find "${ROOTFSDIR}/usr/bin" \
- -maxdepth 1 -name 'qemu-*-static' -type f -delete
-
- mountpoint -q '${ROOTFSDIR}/isar-apt' && \
- umount -l ${ROOTFSDIR}/isar-apt
- rmdir --ignore-fail-on-non-empty ${ROOTFSDIR}/isar-apt
-
- mountpoint -q '${ROOTFSDIR}/base-apt' && \
- umount -l ${ROOTFSDIR}/base-apt
- rmdir --ignore-fail-on-non-empty ${ROOTFSDIR}/base-apt
-
- mountpoint -q '${ROOTFSDIR}/dev' && \
- umount -l ${ROOTFSDIR}/dev
- mountpoint -q '${ROOTFSDIR}/sys' && \
- umount -l ${ROOTFSDIR}/proc
- mountpoint -q '${ROOTFSDIR}/sys' && \
- umount -l ${ROOTFSDIR}/sys
-
- rm -f "${ROOTFSDIR}/etc/apt/apt.conf.d/55isar-fallback.conf"
-
- rm -f "${ROOTFSDIR}/etc/apt/sources.list.d/isar-apt.list"
- rm -f "${ROOTFSDIR}/etc/apt/preferences.d/isar-apt"
- rm -f "${ROOTFSDIR}/etc/apt/sources.list.d/base-apt.list"
-
- mv "${ROOTFSDIR}/etc/apt/sources-list" \
- "${ROOTFSDIR}/etc/apt/sources.list.d/bootstrap.list"
-
- rm -f "${ROOTFSDIR}/etc/apt/sources-list"
-EOSUDO
-}
-
do_rootfs_postprocess[vardeps] = "${ROOTFS_POSTPROCESS_COMMAND}"
python do_rootfs_postprocess() {
# Take care that its correctly mounted:
--
2.17.1
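
With this split, the intent is that any ROOTFS_POSTPROCESS_COMMAND hook can still rely on the chroot, because do_rootfs_finalize only runs afterwards. A rough downstream sketch (the hook name is invented, and it assumes deb-src entries are configured in the rootfs):

ROOTFS_POSTPROCESS_COMMAND += "download_pkg_sources"
download_pkg_sources() {
    # The mounts set up by the rootfs class are still present here;
    # they are only torn down later by do_rootfs_finalize.
    sudo -E chroot "${ROOTFSDIR}" /usr/bin/apt-get -y source --download-only bash
}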

Henning Schild

Feb 11, 2020, 6:38:43 AM
to Vijai Kumar K, isar-...@googlegroups.com, Vijai Kumar K
This patch will allow you to keep your apt downloading feature
downstream for yourself. I would say - propose it again together with
the feature.

In fact if the feature was in Isar the whole problem would go away,
unless you have more postprocess functions.

We have discussed postprocessing a couple of times; it is really bad style,
and by enabling it as an easy-to-use feature we provoke downstream
layers into making mistakes by implementing their stuff as such postprocess
functions.

Henning

vijai kumar

Feb 11, 2020, 9:14:25 AM
to isar-users


On Tuesday, February 11, 2020 at 5:08:43 PM UTC+5:30, Henning Schild wrote:
This patch will allow you to keep your apt downloading feature
downstream for yourself. I would say - propose it again together with
the feature.

I can push that feature if it is needed upstream, which I doubt, since I
don't envision an upstream use case wherein one would need to
download all the sources.

Also, it would need your base-apt series.
 

In fact if the feature was in Isar the whole problem would go away,
unless you have more postprocess functions.

Yes. We have one for our QA layer, to export the dpkg status file to
the deploy directory. This will be used by debsecan.


We discussed postprocessing a couple of times, it is really bad style
and enabling it as a feature that is easy to use we provoke downstream
layers making mistakes by implementing their stuff as such postprocess
functions.

I see the ability to add custom post-processing as a useful feature.
Not sure if anyone actually uses it in their downstream layers. It is
good to have if you know what you are doing.

As long as this provision is there, people would use it. If we feel that this
provision is unnecessary and would lead to issues, well, we could go
ahead and remove it.

Thanks,
Vijai Kumar K

Henning Schild

Feb 11, 2020, 10:20:43 AM
to vijai kumar, isar-users
On Tue, 11 Feb 2020 06:14:25 -0800
vijai kumar <vijaikumar....@gmail.com> wrote:

> On Tuesday, February 11, 2020 at 5:08:43 PM UTC+5:30, Henning Schild
> wrote:
> >
> > This patch will allow you to keep your apt downloading feature
> > downstream for yourself. I would say - propose it again together
> > with the feature.
> >
>
> I can push that feature if it is needed upstream, which I doubt,
> since I don't vision an upstream use-case where-in one would need to
> download all the sources.

The motivation is exactly yours, building a serious product, with an
eye on possibly maintaining packages longer than upstream and OSS
license clearing.

> Also, It would need your base-apt series.

Even better, that needs attention ;).

> >
> > In fact if the feature was in Isar the whole problem would go away,
> > unless you have more postprocess functions.
> >
>
> Yes. We have one per se for our QA layer. To export dpkg status file
> to deploy directory. This will be used by debsecan.
>
>
> > We discussed postprocessing a couple of times, it is really bad
> > style and enabling it as a feature that is easy to use we provoke
> > downstream layers making mistakes by implementing their stuff as
> > such postprocess functions.
> >
>
> I see the ability to add custom post-processing as a useful feature.
> Not sure if anyone actually uses them in their downstream layers. It
> is good to have if you know what you are doing.

I have seen some downstream layers where clearly not everyone knows
what they are doing.

> As long as this provision is there, people would use it. If we feel
> that this
> provision is unnecessary and would lead to issues, well, we could go
> ahead and remove it.

Removing it later on would break downstream layers. So it is easy and
risky to add it, and hard to remove it again.

Henning

> Thanks,
> Vijai Kumar K
>
>
> Henning
> >
> > On Mon, 10 Feb 2020 11:07:53 +0530
> > Vijai Kumar K <vijaikumar...@gmail.com <javascript:>> wrote:
> >
> > > With the current implementation it is difficult to append a
> > > postprocess function which requires a chroot environment.
> > > For example, to add a postprocess function which runs apt-get to
> > > download all source of packages installed in the target.
> > >
> > > rootfs_postprocess_finalize is not actually an optional feature
> > > but instead a necessary cleanup function for image class.
> > > So, move the implementation to image class and make it as a task.
> > >
> > > Signed-off-by: Vijai Kumar K <Vijaikumar_...@mentor.com
> > > <javascript:>> ---

Jan Kiszka

Feb 11, 2020, 1:07:48 PM
to vijai kumar, isar-users
On 11.02.20 15:14, vijai kumar wrote:
>
>
> On Tuesday, February 11, 2020 at 5:08:43 PM UTC+5:30, Henning Schild wrote:
>
> This patch will allow you to keep your apt downloading feature
> downstream for yourself. I would say - propose it again together with
> the feature.
>
>
> I can push that feature if it is needed upstream, which I doubt, since I
> don't vision an upstream use-case where-in one would need to
> download all the sources.

Providing an "all sources for my target" target is surely an upstream topic.

>
> Also, It would need your base-apt series.
>
>
> In fact if the feature was in Isar the whole problem would go away,
> unless you have more postprocess functions.
>
>
> Yes. We have one per se for our QA layer. To export dpkg status file to
> deploy directory. This will be used by debsecan.
>
>
> We discussed postprocessing a couple of times, it is really bad style
> and enabling it as a feature that is easy to use we provoke downstream
> layers making mistakes by implementing their stuff as such postprocess
> functions.
>
>
> I see the ability to add custom post-processing as a useful feature.
> Not sure if anyone actually uses them in their downstream layers. It is
> good to have if you know what you are doing.
>
> As long as this provision is there, people would use it. If we feel that
> this
> provision is unnecessary and would lead to issues, well, we could go
> ahead and remove it.

Even if that hooking mechanism is a double-edged sword, there is already
value inside Isar in defining a clean environment for the hooks and
ensuring that it contains all mounts until the last hook is done. So,
this patch can only be seen as the messenger, not to be shot for
downstream misuse of the overall feature.

Vijai Kumar K

Feb 13, 2020, 5:08:35 AM
to isar-...@googlegroups.com, Vijai Kumar K
With the current implementation it is difficult to append a
postprocess function which requires a chroot environment.
For example, to add a postprocess function which runs apt-get to
download the sources of all packages installed in the target.

rootfs_postprocess_finalize is not actually an optional feature
but rather a necessary cleanup function for the image class.
So, move the implementation to the image class and make it a task.

Signed-off-by: Vijai Kumar K <Vijaikumar_...@mentor.com>
---
Changes in v2:
- Introduced additional patch to cache deb src
- Rebased on top of henning/staging4 tree

The git tree is available here.

https://github.com/vj-kumar/isar/tree/henning/staging4

meta/classes/image.bbclass | 41 ++++++++++++++++++++++++++++++++++++-
meta/classes/rootfs.bbclass | 39 -----------------------------------
2 files changed, 40 insertions(+), 40 deletions(-)

diff --git a/meta/classes/image.bbclass b/meta/classes/image.bbclass
index cfd617a..c5fddba 100644
--- a/meta/classes/image.bbclass
+++ b/meta/classes/image.bbclass
@@ -60,7 +60,7 @@ image_do_mounts() {
}

ROOTFSDIR = "${IMAGE_ROOTFS}"
-ROOTFS_FEATURES += "clean-package-cache finalize-rootfs generate-manifest"
+ROOTFS_FEATURES += "copy-package-cache clean-package-cache generate-manifest"
ROOTFS_PACKAGES += "${IMAGE_PREINSTALL} ${IMAGE_INSTALL}"
ROOTFS_MANIFEST_DEPLOY_DIR ?= "${DEPLOY_DIR_IMAGE}"

@@ -168,5 +168,44 @@ python do_deploy() {
index 54b5e5c..c3af7c1 100644
--- a/meta/classes/rootfs.bbclass
+++ b/meta/classes/rootfs.bbclass
@@ -201,45 +201,6 @@ rootfs_generate_manifest () {
--
2.17.1

Vijai Kumar K

Feb 13, 2020, 5:08:42 AM
to isar-...@googlegroups.com, Vijai Kumar K
Collect the deb sources of the corresponding deb binaries cached
in DEBDIR as part of postprocessing, so that they can later be included
in the final base-apt by do_cache.

Signed-off-by: Vijai Kumar K <Vijaikumar_...@mentor.com>
---
meta/classes/image.bbclass | 2 +-
meta/classes/rootfs.bbclass | 28 ++++++++++++++++++++++++++++
2 files changed, 29 insertions(+), 1 deletion(-)

diff --git a/meta/classes/image.bbclass b/meta/classes/image.bbclass
index c5fddba..77306ce 100644
--- a/meta/classes/image.bbclass
+++ b/meta/classes/image.bbclass
@@ -60,7 +60,7 @@ image_do_mounts() {
}

ROOTFSDIR = "${IMAGE_ROOTFS}"
-ROOTFS_FEATURES += "copy-package-cache clean-package-cache generate-manifest"
+ROOTFS_FEATURES += "copy-package-cache clean-package-cache generate-manifest cache-deb-src"
ROOTFS_PACKAGES += "${IMAGE_PREINSTALL} ${IMAGE_INSTALL}"
ROOTFS_MANIFEST_DEPLOY_DIR ?= "${DEPLOY_DIR_IMAGE}"

diff --git a/meta/classes/rootfs.bbclass b/meta/classes/rootfs.bbclass
index c3af7c1..bef5149 100644
--- a/meta/classes/rootfs.bbclass
+++ b/meta/classes/rootfs.bbclass
@@ -201,6 +201,34 @@ rootfs_generate_manifest () {
${ROOTFS_MANIFEST_DEPLOY_DIR}/"${PF}".manifest
}

+ROOTFS_POSTPROCESS_COMMAND += "${@bb.utils.contains('ROOTFS_FEATURES', 'cache-deb-src', 'cache_deb_src', '', d)}"
+cache_deb_src() {
+ if [ "${ISAR_USE_CACHED_BASE_REPO}" = "1" ]; then
+ return 0
+ fi
+ sudo -s <<'EOSUDO'
+ sudo cp -L /etc/resolv.conf '${ROOTFSDIR}/etc'
+ mkdir -p '${ROOTFSDIR}/deb-src'
+ mountpoint -q '${ROOTFSDIR}/deb-src' || \
+ mount --bind '${DEBSRCDIR}' '${ROOTFSDIR}/deb-src'
+EOSUDO
+ sudo -E chroot ${ROOTFSDIR} /usr/bin/apt-get update
+ find "${DEBDIR}"/"${DISTRO}" -name '*\.deb' | while read package; do
+ local pkg="$( dpkg-deb --show --showformat '${Package}' "${package}" )"
+ local dirname="$( dpkg-deb --show --showformat '${Source}' "${package}" )"
+ if [ -z "${dirname}" ];then
+ dirname="$pkg"
+ fi
+ sudo -E chroot --userspec=$( id -u ):$( id -g ) ${ROOTFSDIR} \
+ sh -c 'mkdir -p "/deb-src/${1}/${2}" && cd "/deb-src/${1}/${2}" && apt-get -y source --download-only "$3"' download-src "${DISTRO}" "${dirname}" "${pkg}"
+ done
+ sudo -s <<'EOSUDO'
+ mountpoint -q '${ROOTFSDIR}/deb-src' && \
+ umount -l ${ROOTFSDIR}/deb-src
+ sudo rm -rf '${ROOTFSDIR}/etc/resolv.conf'
+EOSUDO
+}
+
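
Since the new hook is gated on a ROOTFS_FEATURES entry, an image that does not want the extra downloads could presumably opt out again in its own recipe, e.g.:

ROOTFS_FEATURES_remove = "cache-deb-src"

(the _remove override is standard BitBake of that era; whether exposing the knob this way is desired is of course part of the review).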

Vijai Kumar K

Feb 14, 2020, 12:48:11 AM
to isar-...@googlegroups.com, Vijai Kumar K
With the current implementation it is difficult to append a
postprocess function which requires a chroot environment.
For example, to add a postprocess function which runs apt-get to
download the sources of all packages installed in the target.

rootfs_postprocess_finalize is not actually an optional feature
but rather a necessary cleanup function for the image class.
So, move the implementation to the image class and make it a task.

Signed-off-by: Vijai Kumar K <Vijaikumar_...@mentor.com>
---
Changes in v3:
- Take care of non-existent downloads/deb-src directory.

meta/classes/image.bbclass | 41 ++++++++++++++++++++++++++++++++++++-
meta/classes/rootfs.bbclass | 39 -----------------------------------
2 files changed, 40 insertions(+), 40 deletions(-)

diff --git a/meta/classes/image.bbclass b/meta/classes/image.bbclass
index cfd617a..c5fddba 100644
--- a/meta/classes/image.bbclass
+++ b/meta/classes/image.bbclass
@@ -60,7 +60,7 @@ image_do_mounts() {
}

ROOTFSDIR = "${IMAGE_ROOTFS}"
-ROOTFS_FEATURES += "clean-package-cache finalize-rootfs generate-manifest"
+ROOTFS_FEATURES += "copy-package-cache clean-package-cache generate-manifest"
ROOTFS_PACKAGES += "${IMAGE_PREINSTALL} ${IMAGE_INSTALL}"
ROOTFS_MANIFEST_DEPLOY_DIR ?= "${DEPLOY_DIR_IMAGE}"

@@ -168,5 +168,44 @@ python do_deploy() {
}
addtask deploy before do_build after do_image

+do_rootfs_finalize() {
+ sudo -s <<'EOSUDO'
diff --git a/meta/classes/rootfs.bbclass b/meta/classes/rootfs.bbclass
index 54b5e5c..c3af7c1 100644
--- a/meta/classes/rootfs.bbclass
+++ b/meta/classes/rootfs.bbclass
@@ -201,45 +201,6 @@ rootfs_generate_manifest () {
${ROOTFS_MANIFEST_DEPLOY_DIR}/"${PF}".manifest
}

Vijai Kumar K

Feb 14, 2020, 12:48:14 AM
to isar-...@googlegroups.com, Vijai Kumar K
Collect the deb sources of the corresponding deb binaries cached
in DEBDIR as part of postprocessing, so that they can later be included
in the final base-apt by do_cache.

Signed-off-by: Vijai Kumar K <Vijaikumar_...@mentor.com>
---
meta/classes/image.bbclass | 2 +-
meta/classes/rootfs.bbclass | 29 +++++++++++++++++++++++++++++
2 files changed, 30 insertions(+), 1 deletion(-)

diff --git a/meta/classes/image.bbclass b/meta/classes/image.bbclass
index c5fddba..77306ce 100644
--- a/meta/classes/image.bbclass
+++ b/meta/classes/image.bbclass
@@ -60,7 +60,7 @@ image_do_mounts() {
}

ROOTFSDIR = "${IMAGE_ROOTFS}"
-ROOTFS_FEATURES += "copy-package-cache clean-package-cache generate-manifest"
+ROOTFS_FEATURES += "copy-package-cache clean-package-cache generate-manifest cache-deb-src"
ROOTFS_PACKAGES += "${IMAGE_PREINSTALL} ${IMAGE_INSTALL}"
ROOTFS_MANIFEST_DEPLOY_DIR ?= "${DEPLOY_DIR_IMAGE}"

diff --git a/meta/classes/rootfs.bbclass b/meta/classes/rootfs.bbclass
index c3af7c1..971a299 100644
--- a/meta/classes/rootfs.bbclass
+++ b/meta/classes/rootfs.bbclass
@@ -201,6 +201,35 @@ rootfs_generate_manifest () {
${ROOTFS_MANIFEST_DEPLOY_DIR}/"${PF}".manifest
}

+ROOTFS_POSTPROCESS_COMMAND += "${@bb.utils.contains('ROOTFS_FEATURES', 'cache-deb-src', 'cache_deb_src', '', d)}"
+cache_deb_src() {
+ if [ "${ISAR_USE_CACHED_BASE_REPO}" = "1" ]; then
+ return 0
+ fi
+ sudo -s <<'EOSUDO'
+ sudo cp -L /etc/resolv.conf '${ROOTFSDIR}/etc'
+ mkdir -p "${DEBSRCDIR}"/"${DISTRO}"
+ mkdir -p '${ROOTFSDIR}/deb-src'
+ mountpoint -q '${ROOTFSDIR}/deb-src' || \
+ mount --bind '${DEBSRCDIR}' '${ROOTFSDIR}/deb-src'
+EOSUDO
+ sudo -E chroot ${ROOTFSDIR} /usr/bin/apt-get update
+ find "${DEBDIR}"/"${DISTRO}" -name '*\.deb' | while read package; do
+ local pkg="$( dpkg-deb --show --showformat '${Package}' "${package}" )"
+ local dirname="$( dpkg-deb --show --showformat '${Source}' "${package}" )"
+ if [ -z "${dirname}" ];then
+ dirname="$pkg"
+ fi
+ sudo -E chroot --userspec=$( id -u ):$( id -g ) ${ROOTFSDIR} \
+ sh -c 'mkdir -p "/deb-src/${1}/${2}" && cd "/deb-src/${1}/${2}" && apt-get -y source --download-only "$3"' download-src "${DISTRO}" "${dirname}" "${pkg}"
+ done
+ sudo -s <<'EOSUDO'
+ mountpoint -q '${ROOTFSDIR}/deb-src' && \
+ umount -l ${ROOTFSDIR}/deb-src
+ sudo rm -rf '${ROOTFSDIR}/etc/resolv.conf'
+EOSUDO
+}
+

Jan Kiszka

Feb 14, 2020, 3:19:31 AM
to Vijai Kumar K, isar-...@googlegroups.com, Vijai Kumar K
On 14.02.20 06:48, Vijai Kumar K wrote:
> Collect the deb sources of the corresponding deb binaries cached
> in DEBDIR as part of postprocess for those to be later included
> into the final base-apt by do_cache.
>

So, inclusion into base-apt will come in a later patch? IOW: It's not
yet clear to me if this patch alone is already useful.

Thanks,
Jan

vijai kumar

Feb 14, 2020, 3:41:45 AM
to isar-users


On Friday, February 14, 2020 at 1:49:31 PM UTC+5:30, Jan Kiszka wrote:
On 14.02.20 06:48, Vijai Kumar K wrote:
> Collect the deb sources of the corresponding deb binaries cached
> in DEBDIR as part of postprocess for those to be later included
> into the final base-apt by do_cache.
>

So, inclusion into base-apt will come in a later patch? IOW: It's not
yet clear to me if this patch alone is already useful.

Hi Jan,

The new base-apt rework from Henning downloads all the debs and deb-srcs into the downloads/deb
and downloads/deb-src directories respectively. The actual repo (using reprepro) is not created until base-apt:do_cache is called, for which
you have to trigger an offline build and set ISAR_USE_CACHED_BASE_REPO.

This patch is an extension to the downloads/deb-src downloading, wherein the source files of all downloaded deb files will be cached,
but for them to be converted into a repo, base-apt.bb:do_cache needs to be called.

Thanks,
Vijai Kumar K

vijai kumar

Feb 14, 2020, 3:45:20 AM
to isar-users


On Friday, February 14, 2020 at 2:11:45 PM UTC+5:30, vijai kumar wrote:


On Friday, February 14, 2020 at 1:49:31 PM UTC+5:30, Jan Kiszka wrote:
On 14.02.20 06:48, Vijai Kumar K wrote:
> Collect the deb sources of the corresponding deb binaries cached
> in DEBDIR as part of postprocess for those to be later included
> into the final base-apt by do_cache.
>

So, inclusion into base-apt will come in a later patch? IOW: It's not
yet clear to me if this patch alone is already useful.

Hi Jan,

The new base-apt rework from Henning would download all the deb and deb-srcs in downloads/deb
download/deb-src directories respectively. The actual repo(using reprepro) is not created until base-apt:do_cache is called. For which
you have to trigger a offline build and set ISAR_USE_CACHED_BASE_REPO.

This patch is an extension to the downloads/deb-src downloading, where in all the source files of all downloaded deb files will cached,
but for it to be converted to repo, base-apt.bb:do_cache needs to be called.

So, basically this is a middle piece of Henning's workflow.

vijai kumar

Mar 11, 2020, 3:16:51 AM
to isar-users
As said before, this series, at least the second patch, depends on Henning's base-apt series.

I am going to rebase my changes on top of Henning's v5 and test it out.

Also, should we have these changes as part of Henning's series? Or should I wait for
the series to get in and address these later?

Thanks,
Vijai Kumar K

On Friday, February 14, 2020 at 11:18:11 AM UTC+5:30, vijai kumar wrote:
With the current implementation it is difficult to append a
postprocess function which requires a chroot environment.
For example, to add a postprocess function which runs apt-get to
download all source of packages installed in the target.

rootfs_postprocess_finalize is not actually an optional feature
but instead a necessary cleanup function for image class.
So, move the implementation to image class and make it as a task.

Signed-off-by: Vijai Kumar K <Vijaikumar_Kanagarajan@mentor.com>

vijai kumar

Apr 1, 2020, 3:25:55 AM
to isar-users, Baurzhan Ismagulov, Henning Schild, Jan Kiszka
On Wed, Mar 11, 2020 at 12:46 PM vijai kumar
<vijaikumar....@gmail.com> wrote:
>
> As said before, this series, atleast the second patch, depends on Henning's base-apt series.
>
> I am going to rebase my changes on top of Hennings v5 and test it out.
>
> Also, should we have these changes as part of Hennings series? Or should I wait for
> the series to get in and address these later?
>
Hi All,

While testing this series on top of the current next I got the below
error. Any pointers? I am yet to try a local build.

Failed to fetch
http://deb.debian.org/debian/pool/main/g/gettext/gettext_0.19.8.1.orig.tar.xz
Writing more data than expected (7210080 > 7209808)

Hashes of expected file:
- SHA256:105556dbc5c3fbbc2aa0edb46d22d055748b6f5c7cd7a8d99f8e7eb84e938be4
- Filesize:7209808 [weak]

- MD5Sum:df3f5690eaa30fd228537b00cb7b7590 [weak]
E: Failed to fetch some archives.

http://ci.isar-build.org:8080/job/isar_vkk_devel/40/consoleFull

Also, there are some more fixes to strip the version info from the Source
field, so a v4 is in the pipeline.

Thanks,
Vijai Kumar K


> Thanks,
> Vijai Kumar K
>
> On Friday, February 14, 2020 at 11:18:11 AM UTC+5:30, vijai kumar wrote:
>>
>> With the current implementation it is difficult to append a
>> postprocess function which requires a chroot environment.
>> For example, to add a postprocess function which runs apt-get to
>> download all source of packages installed in the target.
>>
>> rootfs_postprocess_finalize is not actually an optional feature
>> but instead a necessary cleanup function for image class.
>> So, move the implementation to image class and make it as a task.
>>
>> Signed-off-by: Vijai Kumar K <Vijaikumar_...@mentor.com>

Henning Schild

Apr 1, 2020, 4:19:34 AM
to vijai kumar, isar-users, Baurzhan Ismagulov, Jan Kiszka
On Wed, 1 Apr 2020 12:55:43 +0530
vijai kumar <vijaikumar....@gmail.com> wrote:

> On Wed, Mar 11, 2020 at 12:46 PM vijai kumar
> <vijaikumar....@gmail.com> wrote:
> >
> > As said before, this series, atleast the second patch, depends on
> > Henning's base-apt series.
> >
> > I am going to rebase my changes on top of Hennings v5 and test it
> > out.
> >
> > Also, should we have these changes as part of Hennings series? Or
> > should I wait for the series to get in and address these later?
> >
> Hi All,
>
> While testing this series on top of the current next I got the below
> error. Any pointers? I am yet to try a local build.

I guess the main question is whether that issue just came up once, or
whether it persists over several builds.

> Failed to fetch
> http://deb.debian.org/debian/pool/main/g/gettext/gettext_0.19.8.1.orig.tar.xz
> Writing more data than expected (7210080 > 7209808)
>
> Hashes of expected file:
> -
> SHA256:105556dbc5c3fbbc2aa0edb46d22d055748b6f5c7cd7a8d99f8e7eb84e938be4
> - Filesize:7209808 [weak]
>
> - MD5Sum:df3f5690eaa30fd228537b00cb7b7590 [weak]
> E: Failed to fetch some archives.

A fetch should not be affected by what is in next. I would guess/hope
that you just ran into a temporary network hiccup.

Henning

vijai kumar

Apr 1, 2020, 6:29:54 AM
to Henning Schild, isar-users, Baurzhan Ismagulov, Jan Kiszka
On Wed, Apr 1, 2020 at 1:49 PM Henning Schild
<henning...@siemens.com> wrote:
>
> On Wed, 1 Apr 2020 12:55:43 +0530
> vijai kumar <vijaikumar....@gmail.com> wrote:
>
> > On Wed, Mar 11, 2020 at 12:46 PM vijai kumar
> > <vijaikumar....@gmail.com> wrote:
> > >
> > > As said before, this series, atleast the second patch, depends on
> > > Henning's base-apt series.
> > >
> > > I am going to rebase my changes on top of Hennings v5 and test it
> > > out.
> > >
> > > Also, should we have these changes as part of Hennings series? Or
> > > should I wait for the series to get in and address these later?
> > >
> > Hi All,
> >
> > While testing this series on top of the current next I got the below
> > error. Any pointers? I am yet to try a local build.
>
> I guess the main question is whether that issue just came up once, or
> whether it consist over several builds.

It came up last night. I haven't seen these errors before.

>
> > Failed to fetch
> > http://deb.debian.org/debian/pool/main/g/gettext/gettext_0.19.8.1.orig.tar.xz
> > Writing more data than expected (7210080 > 7209808)
> >
> > Hashes of expected file:
> > -
> > SHA256:105556dbc5c3fbbc2aa0edb46d22d055748b6f5c7cd7a8d99f8e7eb84e938be4
> > - Filesize:7209808 [weak]
> >
> > - MD5Sum:df3f5690eaa30fd228537b00cb7b7590 [weak]
> > E: Failed to fetch some archives.
>
> A fetch should not be affected by what is in next. I would guess/hope
> that you just into a temporary network hickup.

I am assuming the same. I hit it in a couple of builds. Anyway my
local build got through. Triggered another job in CI to see if this
issue goes away.

Thanks,
Vijai Kumar K

vijai kumar

Apr 3, 2020, 2:50:54 AM
to Henning Schild, isar-users, Baurzhan Ismagulov, Jan Kiszka
On Wed, Apr 1, 2020 at 3:59 PM vijai kumar
I am getting this fetcher issue consistently (packages differ) in the
ISAR CI build. All my local builds went through. I believe there is
more to it. Some proxy issues affecting apt fetch in CI?

Baurzhan Ismagulov

Apr 3, 2020, 4:30:49 AM
to isar-users
On Fri, Apr 03, 2020 at 12:20:41PM +0530, vijai kumar wrote:
> > > A fetch should not be affected by what is in next. I would guess/hope
> > > that you just into a temporary network hickup.
> >
> > I am assuming the same. I hit it in a couple of builds. Anyway my
> > local build got through. Triggered another job in CI to see if this
> > issues goes away.
>
> I am getting this fetcher issue consistently (packages differ) in the
> ISAR CI build. All my local builds went through. I believe there is
> more to it. Some proxy issues affecting apt fetch in CI?

We had various network issues before, but I haven't seen such an issue till now.
At least the Isar (next) fast build was fine till now. It seems that applying your patch
uncovers some issue...

With kind regards,
Baurzhan.

vijai kumar

Apr 3, 2020, 4:50:34 AM
to isar-users
Yes. The apt-get source download call in my patch is what fails with
the below error.

"Writing more data than expected (<size> > <size>)"

A quick Google search for similar issues got me to the below link, which
recommends using a set of apt options. I tried them but without success.

https://github.com/jenkinsci/docker/issues/543

Thanks,
Vijai Kumar K


>
> With kind regards,
> Baurzhan.
>

Vijai Kumar K

Apr 3, 2020, 9:06:07 AM
to isar-...@googlegroups.com, Vijai Kumar K
With the current implementation it is difficult to append a
postprocess function which requires a chroot environment.
For example, to add a postprocess function which runs apt-get to
download the sources of all packages installed in the target.

rootfs_postprocess_finalize is not actually an optional feature
but rather a necessary cleanup function for the image class.
So, move the implementation to the image class and make it a task.

Signed-off-by: Vijai Kumar K <Vijaikumar_...@mentor.com>
---
Changes in v4:
- Use <source package>=<version> format instead of just using <packagename>
to download the right version of source package.

meta/classes/image.bbclass | 41 ++++++++++++++++++++++++++++++++++++-
meta/classes/rootfs.bbclass | 39 -----------------------------------
2 files changed, 40 insertions(+), 40 deletions(-)

diff --git a/meta/classes/image.bbclass b/meta/classes/image.bbclass
index 96ba863..9fa58f8 100644
--- a/meta/classes/image.bbclass
+++ b/meta/classes/image.bbclass
@@ -60,7 +60,7 @@ image_do_mounts() {
}

ROOTFSDIR = "${IMAGE_ROOTFS}"
-ROOTFS_FEATURES += "clean-package-cache finalize-rootfs generate-manifest"
+ROOTFS_FEATURES += "copy-package-cache clean-package-cache generate-manifest"
ROOTFS_PACKAGES += "${IMAGE_PREINSTALL} ${IMAGE_INSTALL}"
ROOTFS_MANIFEST_DEPLOY_DIR ?= "${DEPLOY_DIR_IMAGE}"

@@ -171,5 +171,44 @@ python do_deploy() {
index 806e824..8bb003d 100644

Vijai Kumar K

Apr 3, 2020, 9:06:11 AM
to isar-...@googlegroups.com, Vijai Kumar K
Collect the deb sources of the corresponding deb binaries cached
in DEBDIR as part of postprocessing, so that they can later be included
in the final base-apt by do_cache.

Signed-off-by: Vijai Kumar K <Vijaikumar_...@mentor.com>
---
meta/classes/image.bbclass | 2 +-
meta/classes/rootfs.bbclass | 46 +++++++++++++++++++++++++++++++++++++
2 files changed, 47 insertions(+), 1 deletion(-)

diff --git a/meta/classes/image.bbclass b/meta/classes/image.bbclass
index 9fa58f8..1c7a527 100644
--- a/meta/classes/image.bbclass
+++ b/meta/classes/image.bbclass
@@ -60,7 +60,7 @@ image_do_mounts() {
}

ROOTFSDIR = "${IMAGE_ROOTFS}"
-ROOTFS_FEATURES += "copy-package-cache clean-package-cache generate-manifest"
+ROOTFS_FEATURES += "copy-package-cache clean-package-cache generate-manifest cache-deb-src"
ROOTFS_PACKAGES += "${IMAGE_PREINSTALL} ${IMAGE_INSTALL}"
ROOTFS_MANIFEST_DEPLOY_DIR ?= "${DEPLOY_DIR_IMAGE}"

diff --git a/meta/classes/rootfs.bbclass b/meta/classes/rootfs.bbclass
index 8bb003d..7bfdfc9 100644
--- a/meta/classes/rootfs.bbclass
+++ b/meta/classes/rootfs.bbclass
@@ -201,6 +201,52 @@ rootfs_generate_manifest () {
${ROOTFS_MANIFEST_DEPLOY_DIR}/"${PF}".manifest
}

+ROOTFS_POSTPROCESS_COMMAND += "${@bb.utils.contains('ROOTFS_FEATURES', 'cache-deb-src', 'cache_deb_src', '', d)}"
+cache_deb_src() {
+ if [ "${ISAR_USE_CACHED_BASE_REPO}" = "1" ]; then
+ return 0
+ fi
+
+ mkdir -p "${DEBSRCDIR}"/"${DISTRO}"
+
+ sudo -s <<'EOSUDO'
+ cp -L /etc/resolv.conf '${ROOTFSDIR}/etc'
+ mkdir -p '${ROOTFSDIR}/deb-src'
+ mountpoint -q '${ROOTFSDIR}/deb-src' || \
+ mount --bind '${DEBSRCDIR}' '${ROOTFSDIR}/deb-src'
+EOSUDO
+
+ sudo -E chroot ${ROOTFSDIR} /usr/bin/apt-get update
+
+ find "${DEBDIR}"/"${DISTRO}" -name '*\.deb' | while read package; do
+ local src="$( dpkg-deb --show --showformat '${Source}' "${package}" )"
+ # If the binary package version and source package version are different, then the
+ # source package version will be present inside "()" of the Source field.
+ local version="$( echo "$src" | cut -sd "(" -f2 | cut -sd ")" -f1 )"
+ if [ -z ${version} ]; then
+ version="$( dpkg-deb --show --showformat '${Version}' "${package}" )"
+ fi
+ # Now strip any version information that might be available.
+ src="$( echo "$src" | cut -d' ' -f1 )"
+ # If there is no source field, then the source package has the same name as the
+ # binary package.
+ if [ -z "${src}" ];then
+ src="$( dpkg-deb --show --showformat '${Package}' "${package}" )"
+ fi
+
+ sudo -E chroot --userspec=$( id -u ):$( id -g ) ${ROOTFSDIR} \
+ sh -c 'mkdir -p "/deb-src/${1}/${2}" && cd "/deb-src/${1}/${2}" && \
+ apt-get -y --download-only --only-source source "$2"="$3"' \
+ download-src "${DISTRO}" "${src}" "${version}"
+ done
+
+ sudo -s <<'EOSUDO'
+ mountpoint -q '${ROOTFSDIR}/deb-src' && \
+ umount -l ${ROOTFSDIR}/deb-src
+ rm -rf '${ROOTFSDIR}/etc/resolv.conf'
+EOSUDO
+}
+
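
To make the Source-field handling concrete, this is what the two cut invocations do for a hypothetical field value (purely illustrative):

src='gcc-8 (8.3.0-6)'
version="$( echo "$src" | cut -sd "(" -f2 | cut -sd ")" -f1 )"   # -> 8.3.0-6
src="$( echo "$src" | cut -d' ' -f1 )"                           # -> gcc-8

For a Source field without a version in parentheses, the -s flag makes the first cut print nothing, so version stays empty and the code falls back to the binary package's ${Version}.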

vijai kumar

Apr 7, 2020, 2:19:29 AM
to isar-users


On Friday, April 3, 2020 at 2:20:34 PM UTC+5:30, vijai kumar wrote:
On Fri, Apr 3, 2020 at 2:00 PM Baurzhan Ismagulov <i...@radix50.net> wrote:
>
> On Fri, Apr 03, 2020 at 12:20:41PM +0530, vijai kumar wrote:
> > > > A fetch should not be affected by what is in next. I would guess/hope
> > > > that you just into a temporary network hickup.
> > >
> > > I am assuming the same. I hit it in a couple of builds. Anyway my
> > > local build got through. Triggered another job in CI to see if this
> > > issues goes away.
> >
> > I am getting this fetcher issue consistently (packages differ) in the
> > ISAR CI build. All my local builds went through. I believe there is
> > more to it. Some proxy issues affecting apt fetch in CI?
>
> We had various network issues before, but I haven't seen such one till now.
> At least Isar (next) fast built fine till now. Seems that applying your patch
> uncovers some issue...

Yes. the apt-get source download call in my patch is what fails with
the below error.

"Writing more data than expected (<size> > <size>)"

A quick google for similar issues got me to the below link which
recommends to use a set of apt options. I tried but without success.

https://github.com/jenkinsci/docker/issues/543

Thanks,
Vijai Kumar K


Hi Baurzhan,

The problem still exists in CI[1]. I am trying to root-cause it. I am not sure how far I can proceed, since it's mostly a black box for me.

E: Failed to fetch http://deb.debian.org/debian/pool/main/l/linux/linux_4.9.210.orig.tar.xz  Writing more data than expected (94933088 > 94867552)

[1]http://ci.isar-build.org:8080/job/isar_vkk_devel/49/consoleText

Thanks,
Vijai Kumar K



>
> With kind regards,
> Baurzhan.
>

Jan Kiszka

Apr 7, 2020, 2:44:56 AM
to Vijai Kumar K, isar-...@googlegroups.com, Vijai Kumar K
How did you construct this apt-get command? I'm trying to match it
against the man page of apt-get but there is no reference to "download-src".

Jan

> + done
> +
> + sudo -s <<'EOSUDO'
> + mountpoint -q '${ROOTFSDIR}/deb-src' && \
> + umount -l ${ROOTFSDIR}/deb-src
> + rm -rf '${ROOTFSDIR}/etc/resolv.conf'
> +EOSUDO
> +}
> +
> do_rootfs_postprocess[vardeps] = "${ROOTFS_POSTPROCESS_COMMAND}"
> python do_rootfs_postprocess() {
> # Take care that its correctly mounted:
>

--

Jan Kiszka

Apr 7, 2020, 2:45:47 AM
to vijai kumar, isar-users
I'll stick this into our CI as well to see if it reproduces there.

Is the error on different packages or always on this one?

Jan

vijai kumar

Apr 7, 2020, 2:53:55 AM
to Jan Kiszka, isar-users
Thanks.

> Is the error on different packages or always on this one?

It is on different packages. I have seen glibc and some other packages fail.

Best,
Vijai Kumar K

vijai kumar

Apr 7, 2020, 2:59:09 AM
to Jan Kiszka, isar-users, Vijai Kumar K
download-src is just the script name.
The shell call is in the below format to pass the args:

sh -c '<commands>' <script name> <args>
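
A minimal standalone illustration of that calling convention:

sh -c 'echo "name=$0 distro=$1 src=$2"' download-src buster gcc-8

prints "name=download-src distro=buster src=gcc-8", i.e. the first word after the quoted command becomes $0 (the script name) and the remaining words become $1, $2, and so on.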

Thanks,
Vijai Kumar K

Jan Kiszka

Apr 7, 2020, 3:04:17 AM
to vijai kumar, isar-users, Vijai Kumar K
Oh, I see. What's the benefit of this obfuscation? Or is there even any
technical need?

Jan

Baurzhan Ismagulov

Apr 7, 2020, 3:12:39 AM
to isar-users
Hello Vijai Kumar,

On Mon, Apr 06, 2020 at 11:19:29PM -0700, vijai kumar wrote:
> The problem still exists in CI[1]. I am trying to root cause it. I am not
> sure how much I can proceed, since its mostly a black box for me.

Ok, so the problem is reproducible. We've set up the CI so that you should be
able to configure your jobs. Could you please verify that you have the
"Configure" entry in the job local menu?

Since the full build takes a long time, I suggest adding -f to both
build_task.sh and smoke_test_task.sh and see whether the problem is
reproducible with cross-build.

build_task.sh starts ci_build.sh with the right host version (stretch or
buster). smoke_test_task.sh does the same for vm_smoke_test.

What else can I do to help you? Please let me know.

We already plan re-evaluating GitLab (that was our first CI and we had severe
performance issues with that) and trying gitlab.com. The goal is that
everything is configurable from the Isar repo and usable from both Jenkins and
GitLab. But that would take some time.

With kind regards,
Baurzhan.

vijai kumar

Apr 7, 2020, 3:59:21 AM
to Jan Kiszka, isar-users, Vijai Kumar K
Yes. To run multiple commands from a single chroot call. We can either
invoke sh -c or have a script that contains the commands that could be
called.
We are using sh -c and hence the need for arguments to pass in the variables.
It could very well be multiple chroot calls or a script, but this is
cleaner, I guess.
Also, to note, this style was inherited from the base-apt series by Henning.

Thanks,
Vijai Kumar K

vijai kumar

Apr 7, 2020, 4:04:12 AM
to isar-users
On Tue, Apr 7, 2020 at 12:42 PM Baurzhan Ismagulov <i...@radix50.net> wrote:
>
> Hello Vijai Kumar,
>
> On Mon, Apr 06, 2020 at 11:19:29PM -0700, vijai kumar wrote:
> > The problem still exists in CI[1]. I am trying to root cause it. I am not
> > sure how much I can proceed, since its mostly a black box for me.
>
> Ok, so the problem is reproducible. We've set up the CI so that you should be
> able to configure your jobs. Could you please verify that you have the
> "Configure" entry in the job local menu?

Yes. I do have that.

>
> Since the full build takes a long time, I suggest adding -f to both
> build_task.sh and smoke_test_task.sh and see whether the problem is
> reproducible with cross-build.

Sure.

>
> build_task.sh starts ci_build.sh with the right host version (stretch or
> buster). smoke_test_task.sh does the same for vm_smoke_test.
>
> What else can I do to help you? Please let me know.

Jan has also triggered a build in an internal CI. I will have a look
at both of these and will get back. Thanks.

>
> We already plan re-evaluating GitLab (that was our first CI and we had severe
> performance issues with that) and trying gitlab.com. The goal is that
> everything is configurable from the Isar repo and usable from both Jenkins and
> GitLab. But that would take some time.

Good to know. Jenkins is good for now. I have not faced many issues
with it till now. And this too is not Jenkins-specific, I guess.

Best,
Vijai Kumar K

>
> With kind regards,
> Baurzhan.
>

Jan Kiszka

Apr 7, 2020, 4:38:35 AM
to vijai kumar, isar-users, Vijai Kumar K
It's not technically needed, giving the commands a name is optional. You
could also unfold the arguments (which would make them more readable).

> Also, to note, this style was inherited from the base-apt series by Henning.
>

Well, if we keep that pattern, then please indent in a more
reader-friendly way:

sh -c 'mkdir -p "/deb-src/${1}/${2}" && cd "/deb-src/${1}/${2}" && \
apt-get -y --download-only --only-source source "$2"="$3"' \
download-src "${DISTRO}" "${src}" "${version}"

vijai kumar

Apr 7, 2020, 5:08:23 AM
to Jan Kiszka, isar-users, Vijai Kumar K
Ah. Just tried that. Looks like unfolding is indeed possible. The man
page says the command is read from the string, so the
variables that are part of the string are expanded. It's not like a subshell, if
I understand correctly now.
In that case unfolding does make sense.

Thanks,
Vijai Kumar K

vijai kumar

Apr 7, 2020, 5:40:09 AM
to isar-users


On Tuesday, April 7, 2020 at 2:38:23 PM UTC+5:30, vijai kumar wrote:
On Tue, Apr 7, 2020 at 2:08 PM Jan Kiszka <jan.k...@siemens.com> wrote:
>
> On 07.04.20 09:59, vijai kumar wrote:
> > On Tue, Apr 7, 2020 at 12:34 PM Jan Kiszka <jan.k...@siemens.com> wrote:
> >>
> >> On 07.04.20 08:58, vijai kumar wrote:
> >>> On Tue, Apr 7, 2020 at 12:14 PM Jan Kiszka <jan.k...@siemens.com> wrote:
> >>>>
> >>>> On 03.04.20 15:05, Vijai Kumar K wrote:
> >>>>> Collect the deb sources of the corresponding deb binaries cached
> >>>>> in DEBDIR as part of postprocess for those to be later included
> >>>>> into the final base-apt by do_cache.
> >>>>>
> >>>>> Signed-off-by: Vijai Kumar K <Vijaikumar_Kanagarajan@mentor.com>

There is at least one issue in this series: we need to take care of the case when HOST_DISTRO != DISTRO. The issue was first identified in rpi-stretch cross compilation.

Thanks,
Vijai Kumar K

Baurzhan Ismagulov

Apr 8, 2020, 4:13:19 AM
to isar-users
Hello Vijai Kumar,

On Tue, Apr 07, 2020 at 02:40:09AM -0700, vijai kumar wrote:
> There is atleast one issue in this series. Need to take care when
> HOST_DISTRO!=DISTRO. Issue first identified in rpi-stretch cross
> compilation.

Do I understand correctly, I should wait for v5?

With kind regards,
Baurzhan.

Henning Schild

Apr 8, 2020, 6:04:30 AM
to Vijai Kumar K, isar-...@googlegroups.com, Vijai Kumar K
On Fri, 3 Apr 2020 18:35:51 +0530,
Vijai Kumar K <vijaikumar....@gmail.com> wrote:
Should the source packages not all end up in the cache, so they can and
probably should be fetched from there?
Looks like we are going online without proxy configuration here. It
also needs a BB_NO_NETWORK guard.

And I would suggest generating the list of things you want to fetch,
factoring out the fetcher from dpkg-base and reusing it instead of copying
it.

And I would personally like a new series of patches to be sent without
"in-reply-to". Maybe it's my client, but I find these deeply nested
threads very hard to follow.

Henning

vijai kumar

Apr 8, 2020, 6:04:58 AM
to isar-users
Yes Baurzhan.

>
> With kind regards,
> Baurzhan.
>

vijai kumar

Apr 8, 2020, 6:37:28 AM
to Henning Schild, isar-users, Vijai Kumar K
Sorry. But I am not able to understand this. Can you please explain it again?
Will take care of that.

>
> And i would suggest to generate the list of things you want to fetch,
> factor out the fetcher from dpkg-base and reuse is instead of copying
> it.

Sure. I will have a look into how I can reuse that part.

>
> And i would personally like a new series of patches to be sent without
> "in-reply-to". Maybe its my client but i find these deeply nested
> threads very hard to follow.

No Problem. Will send the next series separately.

Thanks,
Vijai Kumar K

Henning Schild

Apr 8, 2020, 8:30:11 AM
to vijai kumar, isar-users, Vijai Kumar K
On Wed, 8 Apr 2020 16:07:15 +0530,
vijai kumar <vijaikumar....@gmail.com> wrote:
A first build without the cache will fetch all sources and drop them
into "${DEBSRCDIR}"/"${DISTRO}", just like the apt:// fetcher does.
A second build with an enabled cache will place all those src-pkgs in
base-apt (see the populate_base_apt repo_add_srcpackage loop).

So a second run of this function here should be able to fetch all those
src-pkgs from base-apt. And it would be a good idea to actually do
that to prove that everything is available offline.

Note that for real offline operation BB_NO_NETWORK would be required. And that
"guard" should still be able to download from base-apt. Thinking about
it again ... I think you do not need the guard. If all src-pkgs are
available offline this function will never access the internet; if it
still tries, the invalid-proxy "guard" from isar_export_proxies will
trigger.

I think it boils down to removing the
[ "${ISAR_USE_CACHED_BASE_REPO}" = "1" ] && exit 0
and passing the CI offline/cache test.
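
For reference, the offline round trip described here would look roughly like this from the user's local.conf (a sketch of the intent; the exact variable handling in the base-apt series may differ):

# first build: online, deb/deb-src packages land in the download directory
# second build: consume the generated base-apt repo without network access
ISAR_USE_CACHED_BASE_REPO = "1"
BB_NO_NETWORK = "1"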

> >
> > > + mkdir -p "${DEBSRCDIR}"/"${DISTRO}"
> > > +
> > > + sudo -s <<'EOSUDO'
> > > + cp -L /etc/resolv.conf '${ROOTFSDIR}/etc'
> > > + mkdir -p '${ROOTFSDIR}/deb-src'
> > > + mountpoint -q '${ROOTFSDIR}/deb-src' || \
> > > + mount --bind '${DEBSRCDIR}' '${ROOTFSDIR}/deb-src'
> > > +EOSUDO
> > > +
> > > + sudo -E chroot ${ROOTFSDIR} /usr/bin/apt-get update

Why is that in here? Doing this in the image is not allowed, only for
isar-apt!

> > > + find "${DEBDIR}"/"${DISTRO}" -name '*\.deb' | while read
> > > package; do

You are reading this without grabbing the lock. In multiconfig other
images might be filling that directory as you read it. And you might be
calling dpkg-deb on half copied files.

Try deb_dl_dir_import and looping over /var/cache/apt/archives/ ... in
which case you will find yourself dealing with isar-apt packages that
you need to skip.
In fact you should use the manifest as input to not download packages
installed in other images with the same distro but without the feature.

Yeahh multiconfig!

Henning

vijai kumar

Apr 8, 2020, 9:32:34 AM
to isar-users, Baurzhan Ismagulov, Henning Schild
On Wed, Apr 8, 2020 at 3:34 PM vijai kumar
<vijaikumar....@gmail.com> wrote:
>
> On Wed, Apr 8, 2020 at 1:43 PM Baurzhan Ismagulov <i...@radix50.net> wrote:
> >
> > Hello Vijai Kumar,
> >
> > On Tue, Apr 07, 2020 at 02:40:09AM -0700, vijai kumar wrote:
> > > There is atleast one issue in this series. Need to take care when
> > > HOST_DISTRO!=DISTRO. Issue first identified in rpi-stretch cross
> > > compilation.
> >
> > Do I understand correctly, I should wait for v5?
>
> Yes Baurzhan.

Hi Baurzhan,

On second thought, I believe deb src caching can be addressed in a
separate series. I see that a couple more patches would be needed
apart from the P2 of this series. P1 can be merged, if there are no
review comments, though. It is a feature by itself and has no hard
dependency on the deb-src series as such.

I will start a separate series for src caching, taking into account the
review comments received so far. Please let me know if this is OK.

Thanks,
Vijai Kumar K

vijai kumar

Apr 15, 2020, 2:44:47 AM
to isar-users, Baurzhan Ismagulov
Hi Baurzhan,

Is the below okay? Or should I send that with the new debsrc series?

Thanks,
Vijai Kumar K

On Wed, Apr 8, 2020 at 7:02 PM vijai kumar

Jan Kiszka

unread,
Apr 15, 2020, 3:28:36 AM4/15/20
to vijai kumar, isar-users, Baurzhan Ismagulov
On 15.04.20 08:44, vijai kumar wrote:
> Hi Baurzhan,
>
> Is the below okay? Or should I send that with the new debsrc series?

I would say add a full feature first because Isar users cannot really
benefit from the series on its own yet, can they?

We need a way to feed the fetched sources into a repo or a recipe that
generates a shippable OSS medium corresponding to a binary image or a
script that applies patches to the original sources so that the result
can be pushed to OSS license scanners. I.e. we need an in-tree use case
with a test case.

Jan

vijai kumar

unread,
Apr 15, 2020, 8:29:24 AM4/15/20
to Henning Schild, isar-users, Vijai Kumar K
On Wed, Apr 8, 2020 at 6:00 PM Henning Schild
Hi Henning,

I am sorry. But why is it not allowed? Am I missing any side effects of this
call?

Thanks,
Vijai Kumar K

vijai kumar

unread,
Apr 15, 2020, 9:20:18 AM4/15/20
to Jan Kiszka, isar-users, Baurzhan Ismagulov
On Wed, Apr 15, 2020 at 12:58 PM Jan Kiszka <jan.k...@siemens.com> wrote:
>
> On 15.04.20 08:44, vijai kumar wrote:
> > Hi Baurzhan,
> >
> > Is the below okay? Or should I send that with the new debsrc series?
>
> I would say add a full feature first because Isar users cannot really
> benefit from the series on its own yet, can they?

Patch 1 had its own use case, though. It paves the way for downstream
layers to add their own postprocess functions which rely on a working
chroot. At least I had one use case downstream (deb-src caching). Since
deb-src caching is also moving upstream, I don't really see any use case
for P1 anymore. Postprocess commands can be ordered in such a way that
caching happens before finalize. The patch still paves the way for the
functionality described above for downstream layers, but now I don't
have any use cases. It is just a good-to-have feature now.

>
> We need a way to feed the fetched sources into a repo or a recipe that
> generates a shippable OSS medium corresponding to a binary image or a
> script that applies patches to the original sources so that the result
> can be pushed to OSS license scanners. I.e. we need an in-tree use case
> with a test case.

If I understand correctly, are we also planning for the OSS clearance
(tar containing source files with patches) code to be upstream?

Thanks,
Vijai Kumar K

Jan Kiszka

unread,
Apr 15, 2020, 9:44:53 AM4/15/20
to vijai kumar, isar-users, Baurzhan Ismagulov
On 15.04.20 15:20, vijai kumar wrote:
> On Wed, Apr 15, 2020 at 12:58 PM Jan Kiszka <jan.k...@siemens.com> wrote:
>>
>> On 15.04.20 08:44, vijai kumar wrote:
>>> Hi Baurzhan,
>>>
>>> Is the below okay? Or should I send that with the new debsrc series?
>>
>> I would say add a full feature first because Isar users cannot really
>> benefit from the series on its own yet, can they?
>
> Patch 1 had its own use case, though. It paves the way for downstream
> layers to add their own postprocess functions which rely on a working
> chroot. At least I had one use case downstream (deb-src caching). Since
> deb-src caching is also moving upstream, I don't really see any use case
> for P1 anymore. Postprocess commands can be ordered in such a way that
> caching happens before finalize. The patch still paves the way for the
> functionality described above for downstream layers, but now I don't
> have any use cases. It is just a good-to-have feature now.

I'm not arguing for dropping patch 1. I think we had the discussion
already that it's cleaner to hard-encode that the final step is final.
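
For illustration, hard-encoding that could look roughly like the
following in rootfs.bbclass (a sketch only; mount handling, progress
reporting and the actual task wiring are omitted):

python do_rootfs_postprocess() {
    # Run whatever the image and downstream layers added, in the given order ...
    cmds = (d.getVar('ROOTFS_POSTPROCESS_COMMAND') or '').split()
    for cmd in cmds:
        bb.build.exec_func(cmd, d)
    # ... and only then tear down the chroot, unconditionally last.
    if bb.utils.contains('ROOTFS_FEATURES', 'finalize-rootfs', True, False, d):
        bb.build.exec_func('rootfs_postprocess_finalize', d)
}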

>
>>
>> We need a way to feed the fetched sources into a repo or a recipe that
>> generates a shippable OSS medium corresponding to a binary image or a
>> script that applies patches to the original sources so that the result
>> can be pushed to OSS license scanners. I.e. we need an in-tree use case
>> with a test case.
>
> If I understand correctly, are we also planning for the OSS clearance
> (tar containing source files with patches) code to be upstream?

I was just listing possible examples. Maybe the first one is of most
common interest. However, preparing one or more "flat" (patches applied)
code archives is not an uncommon requirement when you need to feed one
of those license scanners, be they free - like FOSSology or scancode -
or commercial (I don't want to promote any of those).

Jan

Henning Schild

unread,
Apr 15, 2020, 2:20:02 PM4/15/20
to vijai kumar, isar-users, Vijai Kumar K
Thanks for asking, please keep doing that when things are unclear.

An "update" stores a copy of the "view on the repo world" in the image.
It is essentially a copy of the Packages.gz or Sources.gz of all repos.
That information changes over time on the servers, while they still
(hopefully) offer to download packages referenced in older version of
those indexes.

Isar relies on that. It fetches all indexes exactly once and later
downloads packages found in the cached versions. Once you update an
index the "view of the world" moves away from "the state of the image".

On a living debian system you would always upgrade packages after
update-ing the indexes. In an "installer" - like Isar - you probably do
not want those dynamics.

So in order to keep "the state of the image" and "the view of the
world" in sync we never "apt-get update" ... except for isar-apt which
is a repo we can/do control.
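
Concretely, limiting the update to isar-apt can be done with apt
options along these lines (illustrative sketch; Isar's classes use a
similar construct):

# Refresh only the isar-apt index, leaving all other cached package
# lists untouched so the "view of the world" stays pinned.
sudo -E chroot "${ROOTFSDIR}" /usr/bin/apt-get update \
    -o Dir::Etc::SourceList="sources.list.d/isar-apt.list" \
    -o Dir::Etc::SourceParts="-" \
    -o APT::Get::List-Cleanup="0"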

If a build takes a really long time, there is a slim chance that we can
not actually fetch packages found in our old indexes because upstream
does not provide them anymore. I have not seen real evidence of that
potential problem. It could however manifest if we have a long running
build ... arm without cross ... and do additional fetches in postinst
... like you are implementing.
But whatever you can not fetch in the end, is probably not worth
fetching because it is not what was used to construct your image.

Henning

vijai kumar

unread,
Apr 16, 2020, 11:58:06 AM4/16/20
to Henning Schild, isar-users, Vijai Kumar K
On Wed, Apr 15, 2020 at 11:50 PM Henning Schild
Sure. Definitely.

>
> An "update" stores a copy of the "view on the repo world" in the image.
> It is essentially a copy of the Packages.gz or Sources.gz of all repos.
> That information changes over time on the servers, while they still
> (hopefully) offer to download packages referenced in older version of
> those indexes.
>
> Isar relies on that. It fetches all indexes exactly once and later
> downloads packages found in the cached versions. Once you update an
> index the "view of the world" moves away from "the state of the image".
>
> On a living debian system you would always upgrade packages after
> update-ing the indexes. In an "installer" - like Isar - you probably do
> not want those dynamics.
>
> So in order to keep "the state of the image" and "the view of the
> world" in sync we never "apt-get update" ... except for isar-apt which
> is a repo we can/do control.
>
> If a build takes a really long time, there is a slim chance that we can
> not actually fetch packages found in our old indexes because upstream
> does not provide them anymore. I have not seen real evidence of that
> potential problem. It could however manifest if we have a long running
> build ... arm without cross ... and do additional fetches in postinst
> ... like you are implementing.
> But whatever you can not fetch in the end, is probably not worth
> fetching because it is not what was used to construct your image.

Thank you for the explanation, Henning. It would be good if this
information were documented somewhere. ;)

Best,
Vijai Kumar K

Henning Schild

unread,
Apr 16, 2020, 1:29:10 PM4/16/20
to vijai kumar, isar-users, Vijai Kumar K
On Thu, 16 Apr 2020 21:27:54 +0530
Excellent idea! How about you rewrite it in your own words and send a
patch? I will review that. Why you and your own words ... well, you will
be the first test audience to make sure it's good docs.

I mean that partly as a joke and partly seriously; patches are really
welcome.

Henning