Need (XCC) in the subject, since this requires a toolchain/glibc
rebuild.
> -struct serialized_data* serialize_argv_envp(char* const* argv,
> - char* const* envp)
> +struct serialized_data* serialize_argv_envp(const char * const *argv,
> + const char * const *envp)
The pointer * should be moved to be adjacent to the function name.
There are a few other warnings like this in this patch and in some of
the others. Please run checkpatch to catch them.
WARNING: Do not use whitespace before Signed-off-by:
> Added libpfm4 library support. The libpfm4 library provides a database
> of counters available on different Intel platforms.
Where did this library come from? (URL, package name, etc).
> --- a/Makefile
> +++ b/Makefile
> @@ -573,7 +573,7 @@ endif #ifeq ($(mixed-targets),1)
> # List all userspace directories here, and state any dependencies between them,
> # such as how pthread depends on parlib.
>
> -user-dirs = parlib pthread benchutil iplib ndblib vmm
> +user-dirs = parlib pthread benchutil iplib ndblib vmm perfmon
Does perfmon have any dependencies, such as on parlib or pthead? If so,
we'll need entries like this:
benchutil: parlib
pthread: parlib benchutil
iplib: parlib
> From 031a13e37081495c987b414b4b07902058fc5810 Mon Sep 17 00:00:00 2001
> From: Davide Libenzi <dlib...@google.com>
> Date: Mon, 16 Nov 2015 14:53:23 -0800
> Subject: Added ability to declare local per CPU variables
> +#define DEFINE_PERCPU(type, var) \
> + type PERCPU_VARNAME(var) __attribute__ ((section (PERCPU_SECTION_STR)))
> +#define DECLARE_PERCPU(type, var) \
> + extern type PERCPU_VARNAME(var) \
> + __attribute__ ((section (PERCPU_SECTION_STR)))
Mostly curious, does the declaration also need the attribute? I tried
with and without on a PERCPU that I dropped in init.c and it didn't seem
to care one way or the other.
> +struct msr_address {
> + uint32_t addr;
> + const uint32_t *addresses;
> + size_t num_addresses;
> +};
So a struct msr_address has both an overall address, as well as an address per
core (based on its usage later)? When does each core need a different address
for a particular MSR (e.g. MSR_IA32_PERFCTR0)?
Or is this for something else, like "I want to read different MSRs on different
cores, but read them all at the same time." I'm trying to picture how this is
used and what it's used for. If it's cryptic, we can put something in this
header file's comments at the top.
> @@ -362,22 +363,28 @@ static long archread(struct chan *c, void *a, long n, int64_t offset)
> + msr_set_values(&msrv, values, num_cores);
> +
> + err = msr_cores_read(&cset, &msra, &msrv);
> +
> + n = -1;
> + if (likely(!err)) {
> + if (n >= num_cores * sizeof(uint64_t)) {
> + if (!memcpy_to_user_errno(current, a, values,
> + num_cores * sizeof(uint64_t)))
> + n = num_cores * sizeof(uint64_t);
> } else {
What is going on here? n == -1, then we see if it is great enough to cover the
whole range of our output (which is always positive)?
> @@ -458,13 +467,19 @@ static long archwrite(struct chan *c, void *a, long n, int64_t offset)
> if (!address_range_find(msr_wr_wlist, ARRAY_SIZE(msr_wr_wlist),
> (uintptr_t) offset))
> error(EPERM, NULL);
> - core_set_init(&cset);
> - core_set_fill_available(&cset);
> if (n != sizeof(uint64_t))
> error(EINVAL, NULL);
> if (memcpy_from_user_errno(current, &value, a, sizeof(value)))
> return -1;
> - err = msr_cores_write(&cset, (uint32_t) offset, value);
> +
> + core_set_init(&cset);
> + core_set_fill_available(&cset);
> + msr_init_address(&msra);
> + msr_set_address(&msra, (uint32_t) offset);
> + msr_init_value(&msrv);
> + msr_set_value(&msrv, value);
It's a little odd to have to init_value() and then set_value(). I looked at the
functions, so I know that msr_init_value() zero's the msr_value struct, while
msr_set_value() sets the value field of the msr_value struct.
> --- a/kern/arch/x86/perfmon.c
> +++ b/kern/arch/x86/perfmon.c
> +static void perfmon_do_cores_alloc(void *opaque)
> +{
> + perfmon_enable_fix_event(i, TRUE);
> +
> + write_msr(MSR_CORE_PERF_FIXED_CTR0 + i,
> + -(int64_t) pa->ev.trigger_count);
> + write_msr(MSR_CORE_PERF_FIXED_CTR_CTRL, tmp);
> + }
Should the enabling of the event happen before or after setting the event? If
it happens before (as is here) would we collect perf events from the old setting
of the fixed counter? Same goes for down below in the non-fixed counter.
perfmon_do_cores_free() looks okay (disables the event before changing the
fields).
> + } else {
> + for (i = 0; (i < (int) cpu_caps.counters_x_proc) &&
> + ((cctx->counters[i].u.v != 0) ||
> + !perfmon_event_available(i)); i++)
> + ;
That's pretty nasty. I get the impression we want to find the first free
counter (i = idx). It'd be much clearer with the loop just going over i, and
adding breaks or continues or something.
Is there ever a time that the counters[i] value is 0, but the perfmon_event is
not available? I.e., is that a kernel bug, and thus should be a panic?
> +static void perfmon_do_cores_free(void *opaque)
> +{
> + struct perfmon_alloc *pa = (struct perfmon_alloc *) opaque;
> + struct perfmon_cpu_context *cctx = PERCPU_VARPTR(counters_env);
> + int err = 0, coreno = core_id();
> + counter_t ccno = pa->cores_counters[coreno];
> +
> + spin_lock_irqsave(&counters_lock);
What is the lock protecting? (Both here and in the other perfmon_do_
functions). I imagine we need to disable interrupts, in case a PMI IRQ fires,
but do we need to serialize these accesses across the entire machine?
> + if (perfmon_is_fixed_event(&pa->ev)) {
> + uint64_t fxctrl_value = read_msr(MSR_CORE_PERF_FIXED_CTR_CTRL);
> +
> + if (!(fxctrl_value & (FIXCNTR_MASK << ccno)) ||
> + (ccno >= cpu_caps.fix_counters_x_proc)) {
> + err = -ENOENT;
> + } else {
> + perfmon_init_event(&cctx->fixed_counters[ccno]);
> +
> + perfmon_enable_fix_event((int) ccno, FALSE);
> +
> + write_msr(MSR_CORE_PERF_FIXED_CTR_CTRL,
> + fxctrl_value & ~(FIXCNTR_MASK << ccno));
> + write_msr(MSR_CORE_PERF_FIXED_CTR0 + ccno, 0);
For the (fxctrl_value & (FIXCNTR_MASK << ccno)) checks, should "ccno" be
multipled by FIXCNTR_NBITS? You're trying to select a 4-bit chunk of the
register, right? (btw, I'm looking at the SDM V3 18.2, Fig 18-2 for the layout
of the register).
> +void perfmon_init(void)
> +{
> + int i;
>
> /* Enable user level access to the performance counters */
> lcr4(rcr4() | CR4_PCE);
> +
> + /* This will be called from every core, no need to execute more than once.
> + */
> + if (cpu_caps.perfmon_version == 0)
> + perfmon_read_cpu_caps(&cpu_caps);
Technically this is racy. We might be okay, since they are all writing the same
values.
> + write_mmreg32(LAPIC_LVT_PERFMON, IdtLAPIC_PCINT);
> +}
This will break when we move to the x2APIC. We can deal with it later. =)
> +void perfmon_interrupt(struct hw_trapframe *hw_tf, void *data)
> +{
> + int i;
> + struct perfmon_cpu_context *cctx = PERCPU_VARPTR(counters_env);
> + uint64_t gctrl, status;
> +
> + profiler_add_hw_sample(hw_tf);
So it looks like we generate a trace whenever any of the counters overflow, but
we lose track of which one overflowed. I guess that's what userland perf
expects?
> +static int perfmon_alloc_get(struct perfmon_session *ps, int ped, bool reset,
> + struct perfmon_alloc **ppa)
> +{
> + struct perfmon_alloc *pa;
> +
> + if (unlikely((ped < 0) || (ped >= ARRAY_SIZE(ps->allocs))))
> + return -EBADFD;
What is ped, an FD? If it's just an index that can't be < 0, you could make it
an unsigned type. I see the user of this (devarch) was casting it to an int
anyways.
> +struct perfmon_status *perfmon_get_event_status(struct perfmon_session *ps,
> + int ped)
> +{
> + int err;
> + struct core_set cset;
> + struct perfmon_status_env env;
> +
> + err = perfmon_alloc_get(ps, ped, FALSE, &env.pa);
> + if (unlikely(err))
> + return ERR_PTR(err);
If you can avoid it, please don't use ERR_PTR. It makes it a lot harder to tell
how to use a function by looking at its signature. For instance, if I didn't
look into the C code, I'd think the following is fine:
foo = perfmon_get_event_status(ps, ped);
if (!foo)
handle_the_error.
In general, we opt to set errno if possible (which i think is the case here)
instead of using ERR_PTR. ERR_PTR and IS_ERR is pretty much only for linux
compatibility (and ancient memcpy code from 7 years ago). Throwing with error()
is another option.
If you do have to use it here, which could be the case if we're called from IRQ
context, then please comment the function accordingly.
> + env.pef = perfmon_alloc_status();
> + perfmon_setup_alloc_core_set(env.pa, &cset);
> +
> + smp_do_in_cores(&cset, perfmon_do_cores_status, &env);
Are there any weird cases where multiple threads are using the same pa? (env.pa
here).
Is 'pa' basically read-only once we install_session_alloc in perfmon_open_event?
If not, how do we protect from having multiple users?
If that's true and it's the invariant that keeps us from clobbering pa's state,
then please put that in the comments somewhere. Otherwise someone in the future
will modify this and enter a nightmare.
> +struct perfmon_event {
> + union {
> + struct {
> + uint64_t event:8;
> + uint64_t mask:8;
> + uint64_t usr:1;
> + uint64_t os:1;
> + uint64_t edge:1;
> + uint64_t pc:1;
> + uint64_t inten:1;
> + uint64_t anyth:1;
> + uint64_t en:1;
> + uint64_t invcmsk:1;
> + uint64_t cmask:8;
> + uint64_t res:32;
> + } b;
> + uint64_t v;
> + } u;
> + uint64_t flags;
> + uint64_t trigger_count;
> +};
If you can avoid the bitfields in the future, that'd be nice. We try to avoid
using them where possible, in favor of masks and shifts. If this is hard to
change here, then don't bother.
> From 907da2682383ca581a58352456c30c4dab7cd762 Mon Sep 17 00:00:00 2001
> From: Davide Libenzi <dlib...@google.com>
> Date: Sat, 28 Nov 2015 18:35:57 -0800
> Subject: Created the new devarch perf file using the perfmon infrastructure
>
> --- a/kern/arch/x86/devarch.c
> +++ b/kern/arch/x86/devarch.c
> +static void arch_free_perf_context(struct perf_context *pc)
> +{
> + if (likely(pc)) {
From the usage of this function, it looks like a bug if there is !pc.
> + case PERFMON_CMD_COUNTER_STATUS: {
> + uint32_t ped;
> + struct perfmon_status *pef;
> +
> + error_assert(EBADMSG, (kptr + 1 + sizeof(uint32_t)) <= ktop);
> + kptr = get_le_u32(kptr + 1, &ped);
> +
> + pef = perfmon_get_event_status(pc->ps, (int) ped);
> + if (!IS_ERR(pef)) {
> + int i;
> + uint8_t *rptr;
> + uint64_t *mvalues = kzmalloc(num_cores * sizeof(mvalues),
> + KMALLOC_WAIT);
> +
> + for (i = 0; i < num_cores; i++)
> + mvalues[get_os_coreid(i)] = pef->cores_values[i];
This usage of get_os_coreid is probably wrong. The range of i is the list of
os_coreids already, not hw_coreids. Probably just want mvalues[i].
> + pc->resp_size = 3 * sizeof(uint64_t) + sizeof(uint16_t) +
> + num_cores * sizeof(uint64_t);
> + pc->resp = kmalloc(pc->resp_size, KMALLOC_WAIT);
> +
> + rptr = put_le_u64(pc->resp, pef->ev.u.v);
> + rptr = put_le_u64(rptr, pef->ev.flags);
> + rptr = put_le_u64(rptr, pef->ev.trigger_count);
> + rptr = put_le_u16(rptr, num_cores);
Not that we actually support more than 2^16 cores yet, but num_cores is an int.
> + for (i = 0; i < num_cores; i++)
> + rptr = put_le_u64(rptr, mvalues[i]);
> + kfree(mvalues);
> + perfmon_free_event_status(pef);
> + } else {
> + error(-PTR_ERR(pef), NULL);
Can we just have perfmon_get_event_status() throw? Same goes for
perfmon_open_event() and all of these functions.
> + }
> + default:
> + error(EINVAL, NULL);
We should take advantage of error() and have a message, like
error(EINVAL, "Bad perfmon command 0x%x", cmd);
> Added new perf utility to access CPU counters from userspace.
Like the other tools such as snc and kprof2perf, this can't be built by cd-ing
into the directory and running make. I'd like to fix that before it gets out of
hand, though maybe the cure is worse than the symptoms for now.
> diff --git a/tools/profile/perf/perf.c b/tools/profile/perf/perf.c
> new file mode 100644
> index 000000000000..0b5ef5d950b1
> --- /dev/null
> +++ b/tools/profile/perf/perf.c
> +static struct perf_context_config perf_cfg = {
> + .perf_file = "#arch/perf",
> + .kpctl_file = "/prof/kpctl",
It's safer to use #kprof instead of /prof. We happen to have it bound in our
current ifconfig, but there's no guarantee that'll always be the case. In
general, if you know the device by #name, use it directly.
&gt; +void perfmon_interrupt(struct hw_trapframe *hw_tf, void *data)
&gt; +{
&gt; +	int i;
&gt; +	struct perfmon_cpu_context *cctx = PERCPU_VARPTR(counters_env);
&gt; +	uint64_t gctrl, status;
&gt; +
&gt; +	profiler_add_hw_sample(hw_tf);
So it looks like we generate a trace whenever any of the counters overflow, but
we lose track of which one overflowed. I guess that's what userland perf
expects?

I need to add an "ID", and figure out how to pass that to userland perf.
Will do in a follow-up CL.
Second, the set of tools used to perform the build and more generally the build environment should either be recorded or pre-defined.
Third, users should be given a way to recreate a close enough build environment, perform the build process, and verify that the output matches the original build.
create-build-file:
ifneq ($(INVARIANT_BUILD),1)
	@echo "Kernel: $(abspath $(KERNEL_OBJ))" > kern/kfs/etc/build.info
endif

Kernel: /usr/local/google/home/src/akaros/akaros/obj/kern/akaros-kernel
Date: Sun Dec 13 11:03:28 PST 2015
Host: dlibenzi.mtv.corp.google.com

# Samples: 3K of event 'raw 0x840000000000ff80:HG'

Then you need to know what 0xff is for the mask, and what 0x80 is for the
event. But Akaros perf tells you that.
> In order to do that though, I will need to know, on the Akaros side,
> the full path within the build directory, of the akaros kernel ELF
> file. A couple of solutions which come to my mind:
>
> 1) Build system create a magic build info file into KFS
> 2) Build info are stuck in a kernel ELF section, retrievable via
> system call or #some-dev-file
just to be clear, you want a way for a program running on Akaros to
know the path to the kernel when it was built,
e.g. /full/path/to/obj/kern/akaros-kernel (or perhaps
akaros-kernel-64b)?
> Hmm, this is git's doing that.
> I fixed the others.
Not sure about why the pre-commit hooks don't work. They also fail to
checkpatch the commit message (sometimes they catch typos and lack of
signed-off tags). I have a script that I run to checkpatch on a
revision list:
$ cat ~/bin/git-checkpatch
#!/bin/bash
TMPDIR=/tmp/fp-cp
usage()
{
	echo "$0 rev..list"
	exit 1
}

if [ $# -ne 1 ]
then
	usage
fi
mkdir -p $TMPDIR
git format-patch -o $TMPDIR $1
./scripts/checkpatch.pl $TMPDIR/*.patch
rm $TMPDIR/*patch
>
> https://www.google.com/search?q=libpfm4
We've been a little lax with adding code and dealing with licenses in
the past. I'd like it to be clear from the commit message where code
came from. I take it then that you added libpfm4-4.6.0 or something
from http://perfmon2.sourceforge.net/, possibly from
git://git.code.sf.net/p/perfmon2/libpfm4.
> The could be per-CPU locks, but given that this is not an extremely
> hot path, I chose to not do that.
> Turned into per-CPU locks.
That cuts down on the lock contention, but I'm still curious what the
lock is protecting. If you're just trying to protect from an
interrupt causing someone to muck with the state, then a lock isn't
needed.
Still, I'm fine with the lock (per cpu or global). I (and
perhaps whoever reads the code in the future) just want to know if
there's something to be worried about.
OK. So they are immutable after all of the cores run
perfmon_do_cores_alloc() (due to this):
pa->cores_counters[core_id()] = (counter_t) i;
which is before perfmon_install_session_alloc().
That's the sort of thing that could catch someone in the future, even
if it is unlikely. I wasn't asking for the code to change; I was just
saying that a comment would have made it more clear both for me and for
future developers. We ought to be explicit about any of the invariants
about locking or concurrent access.
> > If you can avoid the bitfields in the future, that'd be nice. We
> > try to avoid
> > using them where possible, in favor of masks and shifts. If this
> > is hard to
> > change here, then don't bother.
> >
>
> It makes code much cleaner IMO.
> You had to create macros (or open code them) for getting/setting every
> one of them.
> Which is pretty tedious when you have that many.
I agree that they look nice. My concern with bitfields is that the
compiler might not do what we want with them. I'm not a language
expert here, but here's a sample of some of the issues:
http://stackoverflow.com/questions/6043483/why-bit-endianness-is-an-issue-in-bitfields/6044223#6044223
Again, if this is hard to change here, then don't bother. It's not a
huge deal.
> > Added new perf utility to access CPU counters from userspace.
> >
> > Like the other tools such as snc and kprof2perf, this can't be
> > built by cd-ing
> > into the directory and running make. I'd like to fix that before
> > it gets out of
> > hand, though maybe the cure is worse than the symptoms for now.
> >
>
> I asked the question a few times, about the revised build system. Got
> no answers.
I guess coming up with the right answer for a build system is a little
more involved. There are ways in which we can use the current system
and still have the app Makefiles know about the CC, but I'll look into
it on my own.
What's an MMP record?
> Date: Fri, 27 Nov 2015 10:57:19 -0800
> Subject: Migrated devarch MSR access to new MSR API
> @@ -362,22 +363,28 @@ static long archread(struct chan *c, void *a, long n, int64_t offset)
> + msr_set_values(&msrv, values, num_cores);
> +
> + err = msr_cores_read(&cset, &msra, &msrv);
> +
> + n = -1;
> + if (likely(!err)) {
> + if (n >= num_cores * sizeof(uint64_t)) {
> + if (!memcpy_to_user_errno(current, a, values,
> + num_cores * sizeof(uint64_t)))
> + n = num_cores * sizeof(uint64_t);
> } else {
This didn't seem to get fixed or clarified. Here's my original comment:
What is going on here? n == -1, then we see if it is great enough to cover the
whole range of our output (which is always positive)?
Oh, I found it later in commit f326ef1adcc. Please apply the
fixups to the originally incorrect patch. Besides preserving a working
git history, it saves your reviewers time.
> From 99ac80e47013f91452d8e5c439ba0937a48576c0 Mon Sep 17 00:00:00 2001
> From: Davide Libenzi <dlib...@google.com>
> Date: Mon, 14 Dec 2015 13:45:25 -0800
> Subject: Added ROS_STATIC_ASSERT to common.h and using that in assert.h
>
> Added ROS_STATIC_ASSERT to common.h and using that in assert.h.
We don't need this in ros/common.h, since userspace already has their
own. It's parlib_static_assert(), added by Dan in 4402cbebfec6
("Rename static_assert to avoid conflicting with C++'11 (and later)
(XCC)")
> +static uint64_t perfmon_get_fixevent_mask(const struct perfmon_event *pev,
> + int eventno, uint64_t base)
> +{
> + uint64_t m = 0;
> +
> + if (pev->u.b.inten)
> + m |= 1 << 3;
> + if (pev->u.b.os)
> + m |= (1 << 0);
> + if (pev->u.b.usr)
> + m |= (1 << 1);
This shouldn't compile on this commit. You change perfmon_event to use the
bitfield macros, but didn't change this. I guess you change it in a
later commit, since the final branch builds. There are other places in
this file that need the same fixup.
> @@ -55,13 +56,13 @@ static void perfmon_read_cpu_caps(struct perfmon_cpu_caps *pcc)
>
> cpuid(0x0a, 0, &a, &b, &c, &d);
>
> - ZERO_DATA(*pcc);
> - pcc->perfmon_version = a & 0xff;
> pcc->proc_arch_events = a >> 24;
> pcc->bits_x_counter = (a >> 16) & 0xff;
> pcc->counters_x_proc = (a >> 8) & 0xff;
> pcc->bits_x_fix_counter = (d >> 5) & 0xff;
> pcc->fix_counters_x_proc = d & 0x1f;
> + wmb_f();
> + pcc->perfmon_version = a & 0xff;
> }
What is going on here? Is this an attempt to make it race-free? If
anything this should be a fixup for the original commit, but I'm not
even sure what it's doing.
> static void perfmon_enable_event(int event, bool enable)
> @@ -96,11 +97,11 @@ static uint64_t perfmon_get_fixevent_mask(const struct perfmon_event *pev,
> {
> uint64_t m = 0;
>
> - if (pev->u.b.inten)
> + if (PMEV_GET_EN(pev->event))
> m |= 1 << 3;
> - if (pev->u.b.os)
> + if (PMEV_GET_OS(pev->event))
> m |= (1 << 0);
> - if (pev->u.b.usr)
> + if (PMEV_GET_USR(pev->event))
> m |= (1 << 1);
> m <<= eventno * FIXCNTR_NBITS;
Ah, these (and others) should fixup commit #17 that changed the
bitfields.
> #include <sys/types.h>
>
> +#define PROF_DOM_SHIFT (8 * sizeof(uint64_t) - 4)
> +#define PROF_INFO_MASK (((uint64_t) 1 << PROF_DOM_SHIFT) - 1)
This is a little confusing. The DOM shift is just 60, right? Is that
value special (hence the 8 * sizeof - 4)?
> struct proftype_kern_trace64 {
> + uint64_t info;
> uint64_t tstamp;
> uint16_t cpu;
> uint16_t num_traces;
Subject for a future patch: are these structures copied in native
endianness, or in an endian-independent format?
> @@ -490,7 +511,7 @@ void profiler_notify_mmap(struct proc *p, uintptr_t addr, size_t size, int prot,
> int flags, struct file *f, size_t offset)
> {
> if (kref_get_not_zero(&profiler_kref, 1)) {
> - if (f && (prot & PROT_EXEC) && profiler_percpu_ctx && tracing) {
> + if (f && (prot & PROT_EXEC) && profiler_percpu_ctx) {
> char path_buf[PROFILER_MAX_PRG_PATH];
> char *path = file_abs_path(f, path_buf, sizeof(path_buf));
>
> @@ -504,7 +525,7 @@ void profiler_notify_mmap(struct proc *p, uintptr_t addr, size_t size, int prot,
> void profiler_notify_new_process(struct proc *p)
> {
> if (kref_get_not_zero(&profiler_kref, 1)) {
> - if (profiler_percpu_ctx && tracing && p->binary_path)
> + if (profiler_percpu_ctx && p->binary_path)
> profiler_push_new_process(p);
> kref_put(&profiler_kref);
> }
So now we're emitting trace info on all mmaps and process creation, even
if the profiler is turned off?
> From d2e4bae5a18d21205bf5de243450794a91674386 Mon Sep 17 00:00:00 2001
> From: Davide Libenzi <dlib...@google.com>
> Date: Sun, 13 Dec 2015 13:58:20 -0800
> Subject: Emit build information file into KFS /etc/build.info
> +KERNEL_ELF_PATH=$(abspath $(KERNEL_OBJ))-64b
This -64b is arch specific. We'll want to get this variable from the
arch/Makefile.
> +create-build-file:
> +ifneq ($(INVARIANT_BUILD),1)
Can drop the INVARIANT_BUILD, esp since we don't have a CONFIG_ var for
it.
> + @echo "KernelPath: $(KERNEL_ELF_PATH)" > kern/kfs/etc/build.info
> + @echo "KernelSize: $(shell stat -c %s $(KERNEL_ELF_PATH))" >> \
> + kern/kfs/etc/build.info
> + @echo "Date: `date`" >> kern/kfs/etc/build.info
> + @echo "Host: `hostname`" >> kern/kfs/etc/build.info
Use $(FIRST_KFS_PATH) instead of kern/kfs for all of these. (or rather
use $(FIRST_KFS_PATH) to create KFS_BUILD_INFO_PATH or something).
If you're removing this in an upcoming patchset (the #version stuff),
then don't worry about any of this stuff.
> How?
KERNEL_ELF_PATH can be defined in each arch's Makefile, which gets
included into the top-level Makefile. The existence of the -64b
version of the file is an x86 thing. At least for now. (It's created
by this:
# Need to change the format to 32 bit, to trick multiboot/grub1 into loading
ARCH_POST_LINK_CMD = cp $@ $@-64b; $(OBJCOPY) -I elf64-x86-64 -O elf32-i386 $@
Barret