tooki

asked on

UNIX cut -d command (Urgent please)

I have a text file that has one line, and the fields of the line are separated by two consecutive tabs. How can I extract each field?
I tried: $ cat myfile.txt | cut -d"\t\t" -f0
but this does not work...
Tintin

The delimiter for the cut command must be a single character.

BTW, there is no field 0 with cut; fields start at 1.
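
A quick illustration of both points (a sketch using made-up comma-separated data, not the asker's file):

$ echo "a,b,c" | cut -d',' -f1     # prints "a" - the first field is 1
$ echo "a,b,c" | cut -d',,' -f1    # rejected: the delimiter must be a single character
$ echo "a,b,c" | cut -d',' -f0     # rejected: fields are numbered from 1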

Certainly avizit's suggestion will work, but it would be helpful to see an actual example of your data, to see if there is a better way of extracting the information you need.
AIX, HP-UX & Linux don't allow multi-byte delimiters in `cut`; Solaris does, but I'm not sure if Solaris `cut` reads \t as a Tab char

The default delimiter for Solaris cut (and probably all the others) is a tab.

Note that multi-byte delimiter support still does not allow you to specify 2 or more characters as a delimiter.
If you don't use any single tabs within your fields, just count the fields as {1,3,5,7,9} instead of {1,2,3,4,5}, since the double tab leaves an empty field between each real one (note that the first field is -f1, not -f0).
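
For example (a sketch, assuming the file name myfile.txt from the question; no -d is needed because tab is cut's default delimiter):

$ cut -f1 myfile.txt       # first real field
$ cut -f3 myfile.txt       # second real field (field 2 is the empty one between the two tabs)
$ cut -f1,3,5 myfile.txt   # first three real fields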

Otherwise, a sed preprocessor (or similar) is a must...
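
A sketch of that preprocessing, assuming GNU sed (which understands \t) and that '|' never occurs in the data - replace each double tab with a single '|' and cut on that, which also copes with single tabs inside fields:

$ sed 's/\t\t/|/g' myfile.txt | cut -d'|' -f2   # second real field

On a sed that does not recognise \t, type a literal tab character in its place.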
You can also try doing this with awk. For example, if you want the second field, instead of cut use awk '{print $2}'.
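
For instance (a sketch, assuming nawk/gawk, where a multi-character -F value is treated as a regular expression, and the file name from the question):

$ awk -F'\t\t' '{print $2}' myfile.txt   # split on exactly two tabs, print the second field
$ awk '{print $2}' myfile.txt            # default whitespace splitting - only safe if fields contain no spaces or single tabs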
sed is "awk for dummies"... A lot easier to learn and can to all the "normal" awk stuff, but missing the fancy features. I used a grep/cut/sed combination for years before I took the time to learn awk - and still prefer it...

Using awk works just as well, but does not provide anything extra for this case (except more typing ;o).