Getting rid of \n when using .readlines() [duplicate]

Question:

Getting rid of \n when using .readlines() [duplicate]

I have a .txt file with values in it.

The values are listed like so:

Value1
Value2
Value3
Value4

My goal is to put the values in a list. When I do so, the list looks like this:

['Value1\n', 'Value2\n', ...]

The \n is not needed.

Here is my code:

t = open('filename.txt', 'r+w')
contents = t.readline()

alist = []

for i in contents:
    alist.append(i)
Asked By: TDNS


Answer #1:

This should do what you want (file contents in a list, by line, without \n):

with open(filename) as f:
    mylist = f.read().splitlines() 
Answered By: user3131651

Answer #2:

I’d do this:

alist = [line.rstrip() for line in open('filename.txt')]

or:

with open('filename.txt') as f:
    alist = [line.rstrip() for line in f]
Answered By: hughdbrown

Answer #3:

You can use .rstrip('\n') to only remove newlines from the end of the string:

for i in contents:
    alist.append(i.rstrip('\n'))

This leaves all other whitespace intact. If you don’t care about whitespace at the start and end of your lines, then the big heavy hammer is called .strip().
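
For example, a quick sketch with a made-up line shows the difference:

line = '   Value1\n'    # hypothetical line: leading spaces plus a trailing newline
line.rstrip('\n')       # '   Value1' -> only the newline is removed
line.strip()            # 'Value1'    -> leading and trailing whitespace removed too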

However, since you are reading from a file and are pulling everything into memory anyway, better to use the str.splitlines() method; this splits one string on line separators and returns a list of lines without those separators; use this on the file.read() result and don’t use file.readlines() at all:

alist = t.read().splitlines()
Answered By: Martijn Pieters

Answer #4:

After opening the file, list comprehension can do this in one line:

fh=open('filename')
newlist = [line.rstrip() for line in fh.readlines()]
fh.close()

Just remember to close your file afterwards.
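
If you’d rather not manage the close yourself, the same thing can be written with a with block (a small sketch of the equivalent, assuming the same filename):

with open('filename') as fh:
    # the file is closed automatically when the block exits,
    # even if an exception is raised inside it
    newlist = [line.rstrip() for line in fh]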

Answered By: Lisle

Answer #5:

I used the strip function to get rid of the newline character, as splitlines() was throwing memory errors on a 4 GB file.

Sample Code:

with open(r'C:\aapl.csv', 'r') as apple:
    for apps in apple.readlines():
        print(apps.strip())
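
Note that readlines() still builds the full list of lines in memory; to truly stream a large file you can iterate over the file object itself. A rough sketch, reusing the example path from above:

with open(r'C:\aapl.csv', 'r') as apple:
    # iterating the file object yields one line at a time,
    # so the whole file never has to fit in memory at once
    for apps in apple:
        print(apps.strip())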
Answered By: Yogamurthy

Answer #6:

For each string in your list, use .strip(), which removes whitespace from the beginning and end of the string:

for i in contents:
    alist.append(i.strip())

But depending on your use case, you might be better off using something like numpy.loadtxt or even numpy.genfromtxt if you need a nice array of the data you’re reading from the file.
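
For instance, if the file held one numeric value per line, a sketch with numpy.loadtxt might look like this (assuming numpy is installed and the values parse as floats):

import numpy as np

data = np.loadtxt('filename.txt')   # 1-D float array; no newline handling needed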

Answered By: askewchan

Answer #7:

with open('bvc.txt') as f:
    # str.rstrip with no argument strips trailing whitespace, including the newline
    alist = list(map(str.rstrip, f))

Nota bene: rstrip() removes the whitespace characters, that is to say \f, \n, \r, \t, \v and the blank space,
but I suppose you’re only interested in keeping the significant characters of the lines. In that case a mere map(str.strip, f) will fit better, removing the leading whitespace too.


If you really want to eliminate only the NL \n and CR \r symbols, do:

with open('bvc.txt') as f:
    alist = f.read().splitlines()

splitlines() without an argument doesn’t keep the NL and CR symbols (Windows ends its lines with \r\n, i.e. CR+LF, at least on my machine) but keeps the other whitespace, notably the blanks and tabs.
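
A quick illustration with made-up data of what splitlines() keeps and drops:

text = 'Value1\t \r\nValue2\r\n'
text.splitlines()    # ['Value1\t ', 'Value2'] -> line endings removed, tab and blank kept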

On the other hand:

with open('bvc.txt') as f:
    alist = f.read().splitlines(True)

has the same effect as

with open('bvc.txt') as f:
    alist = f.readlines()

that is to say, the NL and CR are kept.
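
A small sketch to check the equivalence on an ordinary text file (reusing the filename from above):

with open('bvc.txt') as f:
    kept = f.read().splitlines(True)
with open('bvc.txt') as f:
    lines = f.readlines()
# for a typical file with \n or \r\n line endings, both keep the endings,
# so the two lists compare equal
assert kept == lines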

Answered By: eyquem

Answer #8:

I had the same problem and found the following solution to be very efficient. I hope it will help you, or anyone else who wants to do the same thing.

First of all, I would start with a with statement, as it ensures the file is properly opened and closed.

It should look something like this:

with open("filename.txt", "r+") as f:
    contents = [x.strip() for x in f.readlines()]

If you want to convert those strings (every item in the contents list is a string) to integers or floats, you can do the following:

contents = [float(value) for value in contents]

Use int instead of float if you want to convert to integer.

It’s my first answer on SO, so sorry if the formatting isn’t quite right.

Answered By: geo1230
