To make the awk script work uniformly, you should convert the single-line XML file s1.txt into one matching s.txt's layout.
That is, create a Perl script that reformats the incoming XML into a common format: you define which types of entries must be on lines by themselves, which entries have the open and close tags on one line, etc.
Are there multiple processes that generate these XML files?
IMHO it is easier to make the input file have a uniform layout than to come up with a script that matches every possible variation.
Hatrix76
May I propose a different approach than awk?
xpath will deliver the first element after Transaction (with all its children), so cut out the first element shown and you have the name of the element directly following Transaction. It does not matter how the XML is formatted or how often Transaction appears in the XML:
xpath file.xml "//Transaction/*[1]" 2>/dev/null | sed -e 's/^<\([^>]*\)>.*/\1/g'
Explanation:
"//Transaction/*[1]" <- XPath query selecting the first child element of every Transaction element
2>/dev/null <- xpath prints some additional output on stderr; discard it
sed -e 's/^<\([^>]*\)>.*/\1/g' <- cut out the element name so only it is displayed
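To see what the sed stage contributes on its own, here is the same expression applied to a sample line like the one xpath would print for that query (the element name and value below are made up):

```shell
# Feed a sample matched element through the sed expression;
# everything after the opening tag is stripped, leaving the tag name.
printf '<Type>Sale</Type>\n' | sed -e 's/^<\([^>]*\)>.*/\1/g'
# prints: Type
```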
xpath should be available, or easily installable, on any Unix system.
best
Ray
vignesh_prabhu
ASKER
arnold - Yes, the XML needs to be formatted so the script works. Unfortunately the XML files are always in the s1.txt format. To be on the safe side, I have slightly modified woolmilkporc's code as below. This works for both s.txt and s1.txt.
awk '
BEGIN {
    FS="[<>]"
}
{
    for (i = 1; i <= NF; i++) {
        if ($i != "") {
            if ($i == "Transaction") {
                j = 1
                continue
            }
            if (j == 1) {
                print $i
                exit
            }
        }
    }
}'
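For reference, the script can be exercised against a single-line sample in the s1.txt style (the file content and element names below are invented for the demo):

```shell
# Splitting on < and > turns each tag name into its own field; once the
# "Transaction" field is seen, the next non-empty field is printed.
printf '<Batch><Transaction><Payment><Amt>10</Amt></Payment></Transaction></Batch>\n' |
awk '
BEGIN { FS="[<>]" }
{
    for (i = 1; i <= NF; i++) {
        if ($i != "") {
            if ($i == "Transaction") { j = 1; continue }
            if (j == 1) { print $i; exit }
        }
    }
}'
# prints: Payment
```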
Thanks woolmilkporc.
hatrix76 - Thanks for the suggestion. Unfortunately I do not have xpath installed on our servers. I have requested the admin to install it. Until then I have to go with the awk variant.