i hate to ask this... since i know it, but my memory... :(

Place to discuss Fedora and/or Red Hat


Postby Basher52 » Fri Oct 29, 2004 3:25 pm

i know i used the command for getting unique rows in a file, but now i just can't find it :(

i thought it had something to do with fgrep, but nope :(

i have a file that has multiple rows of the same data, but i wanna nuke the duplicates... how do i do that?
sorry for being a pain... :P

B52
Basher52
guru
Posts: 881
Joined: Wed Oct 22, 2003 5:57 am
Location: .SE

Postby Void Main » Fri Oct 29, 2004 5:23 pm

"sort -u" is probably what you are looking for. You can also use the "uniq" command which has more capability but the file still must be sorted. Usually you would do something like:

$ cat file.txt | sort -u

or

$ sort -u file.txt

or

$ sort file.txt | uniq
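(Not from the original post, but worth noting: `uniq` only collapses *adjacent* duplicate lines, which is why the input must be sorted first. It can also count occurrences with `-c`. A small sketch, using a made-up `file.txt`:)

```shell
# Create a sample file with unsorted, duplicated rows
printf 'apple\nbanana\napple\ncherry\nbanana\napple\n' > file.txt

# sort -u: sort and drop duplicates in one step
sort -u file.txt
# apple
# banana
# cherry

# uniq alone misses non-adjacent duplicates, since it only
# compares each line with the one immediately before it...
uniq file.txt

# ...so sort first; -c prepends a count of each distinct row
sort file.txt | uniq -c
```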
Void Main
Site Admin
Posts: 5705
Joined: Wed Jan 08, 2003 5:24 am
Location: Tuxville, USA

Postby Basher52 » Mon Nov 01, 2004 5:07 am

yep.. there we have it :D
sort and uniq 8)
