I know I can do something like this (see below), and it will put my values into variables which I can use in the shell. But it also involves invoking awk three times, which is wasteful.
cpu_hh=$(echo $cpu_time | $AWK -F':' ' { print $1 } ')
cpu_mm=$(echo $cpu_time | $AWK -F':' ' { print $2 } ')
cpu_sec=$(echo $cpu_time | $AWK -F':' ' { print $3 } ')
Can somebody show me an example of how I can pass a variable into awk (in this case I am assuming I have to initialize it as an array with set -A array before I pass it into awk?), use the split command, and have the values available in the variable I passed in, i.e. array[1], array[2], array[3], so I can access the variable outside of awk but within my script?
Thanks to all who answer this post.
You don't need awk at all. With bash, you can do
IFS=\: read -r cpu_hh cpu_mm cpu_sec <<<"$cpu_time"
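As a self-contained sketch of that approach (the 23:12:56 value is assumed for illustration; it appears later in the thread):

```shell
#!/usr/bin/env bash
# Split HH:MM:SS into three variables with one read and no external
# commands.  The IFS=: prefix applies to the read only; the
# here-string (<<<) feeds the value on read's standard input.
cpu_time=23:12:56   # example value

IFS=: read -r cpu_hh cpu_mm cpu_sec <<<"$cpu_time"

echo "$cpu_hh $cpu_mm $cpu_sec"
```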
Are you asking for something like...
cpu_time=23:12:56
x=$( echo $cpu_time | awk -F: '{print "cpu_hh="$1" cpu_mm="$2" cpu_sec="$3}' )
eval $x
echo $cpu_hh
echo $cpu_mm
echo $cpu_sec
Or maybe a shell-only version... (in ksh and zsh, but not in bash)
echo $cpu_time | IFS=: read cpu_hh cpu_mm cpu_sec
echo $cpu_hh
echo $cpu_mm
echo $cpu_sec
The same in bash...
echo $cpu_time | { IFS=: read cpu_hh cpu_mm cpu_sec
echo $cpu_hh
echo $cpu_mm
echo $cpu_sec
}
Janis
You can do that also in other shells, like ksh and zsh, BTW.
(But I think <<< is non standard.)
Janis
I think you are asking the wrong question. Why must you use awk? You
can get the result directly into shell variables in lots of ways that
don't use awk. For example
IFS=':' read cpu_hh cpu_mm cpu_ss
and provide the time as an input. You could also, in bash, use an
array:
cpu=($(tr ':' ' ' <<<$cpu_time))
to get ${cpu[0]} etc.
If you don't want to run even tr, use a function:
function split
{
IFS=':'
set -- $1
echo "$1" "$2" "$3"
}
cpu=($(split "$cpu_time"))
Variations on this scheme could provide you with the components one at a
time.
In short, there are lots of ways that are probably better than using
awk. Which is best probably depends on where the data is coming from
and where it will eventually go. In some cases, we might go full circle
and decide that awk is the best option if the data comes from a
multi-line file with lots of times in it.
--
Ben.
> > You don't need awk at all. With bash, you can do
> >
> > IFS=\: read -r cpu_hh cpu_mm cpu_sec <<<"$cpu_time"
> >
>
> You can do that also in other shells, like ksh and zsh, BTW.
> (But I think <<< is non standard.)
It's not; that's why I said "in bash", because it's the only one I know
that does it. But yes, it may well be supported by other shells.
And with shells that don't run the last command in a subshell, one can even
do (standard)
echo "$cpu_time" | read ...
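A related note: bash 4.2 and later can opt into that no-subshell behavior for the last pipeline command with the lastpipe option (this postdates much of the discussion here). It takes effect when job control is off, which is the normal state in a script:

```shell
#!/usr/bin/env bash
# bash 4.2+ only: with lastpipe set (and job control off, as in a
# script), the final command of a pipeline runs in the current shell,
# so the variables set by read survive the pipeline.
shopt -s lastpipe
cpu_time=23:12:56   # example value
echo "$cpu_time" | IFS=: read -r cpu_hh cpu_mm cpu_sec
echo "$cpu_hh $cpu_mm $cpu_sec"
```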
Several good answers have been supplied. Another alternative (if this is just a time value obtained from the "date" command) is to alter the usage of date so that you don't have to strip out the colon or alter the field separator. This works in Korn shell:
# echo "$(date +"%H %M %S")" | read cpu_hh cpu_mm cpu_sec
Quite a few of the modern shell features in bash have been borrowed from
other shells, so it's always worth checking that before making a statement
about a presumed bashism. That said, bash has also unnecessarily re-invented
a few proprietary constructs that other shells already had available in a
different form.
> And with shells that don't run the last command in a subshell, one can even
> do (standard)
>
> echo "$cpu_time" | read ...
>
Yes, that's what I proposed, and you can do it even with shells that
run the last pipe command in a subshell if you use the command
grouping construct that I posted as a variant.
Janis
Since you can simply define the IFS for the read command, there's no
need to require the restriction of white-space field separation in
the input data. (See pk's, my own, and Ben's postings upthread.)
The bigger problem is the subshell incoherence across various shells
WRT the last pipe command; but there's a simple construct to address
that as well.
Janis
No idea why you are using echo here; a straightforward
date +"%H %M %S" | read cpu_hh cpu_mm cpu_sec
works fine and should be faster. (Note other comments about bash not
running the final process in a pipeline in the current shell....)
However
eval $(date +"cpu_hh=%H; cpu_mm=%M; cpu_sec=%S" )
works fine in all modern "sh" shells.
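As a runnable sketch of that (the colon-joined echo at the end is just for display; the assignments come straight from date):

```shell
#!/bin/sh
# Let date itself print shell assignments, then eval them.
# Works in any POSIX sh; no field splitting is needed at all.
eval $(date +"cpu_hh=%H; cpu_mm=%M; cpu_sec=%S")
echo "$cpu_hh:$cpu_mm:$cpu_sec"
```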
Others have shown how to do this without awk. But it often IS useful to
pass variables into awk, I thought I'd answer that question. You do it
using the -v option, e.g.
awk -v cpu_time="$cpu_time" 'BEGIN {split(cpu_time, array, ":"); print array[1], array[2], array[3]}'
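To land those fields back in shell variables (the original goal), one sketch is to capture awk's output with set --; the 23:12:56 value is assumed for illustration:

```shell
#!/bin/sh
cpu_time=23:12:56   # example value

# -v copies the shell variable's VALUE into awk's cpu_time variable;
# split() breaks it on ':' and awk prints the three fields, which
# set -- then captures as positional parameters.
set -- $(awk -v cpu_time="$cpu_time" \
    'BEGIN { split(cpu_time, a, ":"); print a[1], a[2], a[3] }')
cpu_hh=$1 cpu_mm=$2 cpu_sec=$3
echo "$cpu_hh $cpu_mm $cpu_sec"
```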
--
Barry Margolin, bar...@alum.mit.edu
Arlington, MA
*** PLEASE post questions in newsgroups, not directly to me ***
*** PLEASE don't copy me on replies, I'll read them in the group ***
You CANNOT, see below.
> (in this case I am assuming I have to initialize it as an array set -A
> array before
> I pass it into awk?), use the split command, and have the values
> available in my
> variable I passed in Ie arrary[1], array[2], array[3] so I can
> access the variable
> outside of AWK but within my script.
Others have answered your question, but the above makes me think you may have a
fundamental misunderstanding about the relationship between awk and shell. awk
is not shell. You can no more pass a shell array to awk than you can pass a
shell array to a C program. Ditto for any variable. You wouldn't expect to pass
a shell variable to a C program and after the C program runs the shell variable
to have magically changed value, and neither should you expect an awk program to
be able to alter the value of a shell variable.
You can pass the VALUE of a shell variable (i.e. not the variable, but its
contents) to a C program, and you can have that C program print something in its
output and have the shell assign the value of that printed output back to the
original variable, e.g.:
sh_var=$(c_prog "$sh_var")
and so you can with awk; note, though, that awk treats command-line operands as filenames, so the value goes in on standard input:
sh_var=$(printf '%s\n' "$sh_var" | awk 'awk_script')
but in awk there's a shorthand that lets you specify on the command line the
name of an awk variable to initialize with that shell variable's value:
sh_var=$(awk -v awk_var="$sh_var" 'awk_script')
rather than just passing in a value and having awk_script figure out which
variable to initialize to that value.
So, if you want to write an awk script that takes the value of some shell
variable and prints that value divided by 2, that'd be:
sh_var=10
awk -v awk_var="$sh_var" 'BEGIN{ print awk_var / 2; exit }'
and if you want to set the shell variable to that result, then it's:
sh_var=10
sh_var=$(awk -v awk_var="$sh_var" 'BEGIN{ print awk_var / 2; exit }')
but be clear - you are passing the VALUE of a shell variable in to awk,
printing the result of the awk script, and then setting the shell variable to
awk's output; you are NOT passing in and manipulating a shell variable.
Ed.
P.S. Yes, I know you can write a script that jumps back and forth between awk
and shell using combinations of multiple embedded single and double quotes so it
looks like your awk script is directly accessing your shell variables. Just
don't - it's hard to read and error prone compared to doing it as above.
Ben, thanks, but your examples don't appear to work in ksh, which is the
shell I am using.
I.e.
function split
{
IFS=':'
set -- $1
echo "$1" "$2" "$3"
}
cpu=($(split "$cpu_time"))
./split.ksht[42]: 0403-057 Syntax error at line 42 : `(' is not
expected.
As for this
cpu=($(tr ':' ' ' <<<$cpu_time))
I'm not sure what <<< means, as opposed to <, which I do know, so I am
staying away from this example. Also, it does not work with ksh:
./split.ksht[35]: 0403-057 Syntax error at line 35 : `(' is not
expected.
I suppose you are probably using pdksh, or maybe some very old ksh88?
Instead, use the one from AT&T (www.kornshell.com). The proposal works well
there.
But for your task I'd suggest resorting to Icarus' last suggestion, which
is the simplest and most portable.
Janis
> On Jul 7, 1:57 pm, Ben Bacarisse <ben.use...@bsb.me.uk> wrote:
>> [...] You could also, in bash, use an
>> array:
>>
>> cpu=($(tr ':' ' ' <<<$cpu_time))
>>
>> to get ${cpu[0]} etc.
>>
>> If you don't want to run even tr, use a function:
>>
>> function split
>> {
>> IFS=':'
>> set -- $1
>> echo "$1" "$2" "$3"
>> }
>>
>> cpu=($(split "$cpu_time"))
<snip>
> Ben thanks but your examples dont apper to work in ksh, which is the
> shell I am using.
No, indeed. That's why I said "in bash"!
I think this is portable across shells:
function set_cpu_time
{
cpu_hh="$1"
cpu_mm="$2"
cpu_ss="$3"
}
and then call:
set_cpu_time $(split "$cpu_time")
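Combined into one runnable sketch (POSIX function syntax, and set -- $1 left unquoted on purpose so IFS splitting happens; the function is named split_time here to avoid shadowing the split utility):

```shell
#!/bin/sh
# split_time: turn HH:MM:SS into three whitespace-separated words.
# The IFS change stays inside the command-substitution subshell.
split_time() {
    IFS=':'
    set -- $1    # unquoted on purpose: IFS does the splitting
    echo "$1" "$2" "$3"
}

# set_cpu_time: assign the three words to named variables.
set_cpu_time() {
    cpu_hh=$1
    cpu_mm=$2
    cpu_ss=$3
}

cpu_time=23:12:56   # example value
set_cpu_time $(split_time "$cpu_time")
echo "$cpu_hh $cpu_mm $cpu_ss"
```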
<snip>
--
Ben.
Icarus' solution assumed the time came from a date command whose format
could be altered to get the desired effect. That may fit the bill here,
but it is not a solution to the question as asked.
<snip>
--
Ben.
You're right. Here's one using eval without assuming a date command:
eval $( printf "cpu_hh=%s cpu_mm=%s cpu_sec=%s" ${cpu_time//:/ } )
Though the variable substitution is non-POSIX, so it works only with
modern shells.
Janis
>
> <snip>
What would be considered a modern shell, bash? This does not work
with ksh. I already found the solution, but I am very interested in
other methods to skin the proverbial cat.
cpu_time=23:12:56
eval $(printf "cpu_hh=%s cpu_mm=%s cpu_sec=%s" ${cpu_time//:/})
${cpu_time//:/}: 0403-011 The specified substitution is not valid for
this command.
It doesn't with ksh88; it does with ksh93, bash, and zsh.
ksh88 is indeed not a "modern" shell; it's very old (though still
existing on commercial Unixes).
For old shells like ksh88, the Bourne shell, or strictly standard shells,
use the upthread-posted approach with IFS=: read, which will certainly
work on those non-"modern" shells.
Janis
If you try it on a shell that supports the substitution, include the space
that is supposed to replace the colons:
${cpu_time//:/ }
^
jl
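With that space restored, the whole thing works in ksh93, bash, and zsh:

```shell
#!/usr/bin/env bash
cpu_time=23:12:56
# ${cpu_time//:/ } replaces every colon with a space (note the space
# after the second slash), so printf receives three separate words
# and emits three assignments for eval.
eval $(printf "cpu_hh=%s cpu_mm=%s cpu_sec=%s" ${cpu_time//:/ })
echo "$cpu_hh $cpu_mm $cpu_sec"
```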