DataFrame.to_csv throws exception #290
Comments
wesm added a commit that referenced this issue on Oct 26, 2011
Bah, what a bummer. Just put out the release yesterday. Fixed in the above commit.

Thanks for the quick fix. It is easy to work around by passing an index_label to to_csv, so no big deal. Thanks again.
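The workaround mentioned above can be sketched like this (a hypothetical translation to the modern API: `io.StringIO` and `pd.read_csv` stand in for the original Python 2 `cStringIO` and `DataFrame.from_csv`, and the `'key'` label is illustrative):

```python
import io

import pandas as pd

# Same shape of data as in the report: two columns, the first is the index.
f1 = io.StringIO('a,1.0\nb,2.0')
df = pd.read_csv(f1, header=None, index_col=0)

newdf = pd.DataFrame({'t': df[df.columns[0]]})

# Passing an explicit index_label sidesteps the 'NoneType' iteration in the
# affected version: the header row becomes "key,t" instead of list(None).
buf = io.StringIO()
newdf.to_csv(buf, index_label='key')
print(buf.getvalue())
```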
yarikoptic added a commit to neurodebian/pandas that referenced this issue on Nov 2, 2011
* commit 'v0.5.0-7-gcf32be2': (161 commits)
  ENH: add melt function, speed up DataFrame.apply
  DOC: release notes re: GH pandas-dev#304
  BUG: clear Series caches on consolidation, address GH pandas-dev#304
  DOC: fix exceptions in docs
  ENH: cython count_level function, cleanup and tests
  DOC: update release note
  BUG: fix DataFrame.to_csv bug described in GH pandas-dev#290
  RLS: Version 0.5.0
  BLD: another 2to3 fix
  BLD: docstring fixes to suppress 2to3 warnings
  BUG: handle negative indices extending before beginning of Series
  TST: fix test case broken by last change
  BUG: don't be too aggressive with int conversion parsing MultiIndex, GH pandas-dev#285
  BUG: missed one
  BUG: workaround not being able to use cast=True with boolean dtype in Python 2.5
  TST: tuples and strings aren't comparable in python 3
  TST: more 32-bit integer fussiness
  ENH: -> int64 everywhere
  TST: int64 fixes
  TST: 32-bit use 64-bit integer
  ...
dan-nadler pushed a commit to dan-nadler/pandas that referenced this issue on Sep 23, 2019
columns. By using the index bitfield masks we can return a sparse dataframe. This is a behaviour change, as we don't return rows for timestamps where the field wasn't updated.

Old code:
=========
# All columns
%timeit l.read('3284.JP', date_range=adu.DateRange(20170101, 20170206))
1 loops, best of 3: 1.99 s per loop

# Multiple columns
%timeit l.read('3284.JP', date_range=adu.DateRange(20170101, 20170206), columns=['DISC_BID1', 'BID'])
10 loops, best of 3: 82.2 ms per loop

# Single very sparse column
%timeit l.read('3284.JP', date_range=adu.DateRange(20170101, 20170206), columns=['DISC_BID1'])
10 loops, best of 3: 76.4 ms per loop

New code:
=========
# All columns
%timeit l.read('3284.JP', date_range=adu.DateRange(20170101, 20170206))
1 loop, best of 3: 2.29 s per loop

# Multiple columns
%timeit l.read('3284.JP', date_range=adu.DateRange(20170101, 20170206), columns=['DISC_BID1', 'BID'])
10 loops, best of 3: 75.4 ms per loop

# Single very sparse column
%timeit l.read('3284.JP', date_range=adu.DateRange(20170101, 20170206), columns=['DISC_BID1'])
10 loops, best of 3: 47.4 ms per loop

Fixes pandas-dev#290
Hi, I ran into a problem with the current master on GitHub. Here's a minimal example:
```python
import cStringIO as StringIO
import pandas

f1 = StringIO.StringIO('a,1.0\nb,2.0')
df = pandas.DataFrame.from_csv(f1, header=None)
newdf = pandas.DataFrame({'t': df[df.columns[0]]})
newdf.to_csv('/tmp/test.csv')
```
The last line gives me an exception like this:

```
Traceback (most recent call last):
  File "/tmp/test.py", line 6, in <module>
    newdf.to_csv('/tmp/test.csv')
  File "/usr/lib/python2.7/dist-packages/pandas/core/frame.py", line 531, in to_csv
    csvout.writerow(list(index_label) + list(cols))
TypeError: 'NoneType' object is not iterable
```
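The traceback points at the root cause: with the default `index_label=None`, the writer calls `list(index_label)` directly, and `list(None)` raises exactly this `TypeError`. A minimal stand-alone sketch of the failing pattern and a guard (a hypothetical helper, not the actual pandas code):

```python
def header_row(cols, index_label=None):
    # list(None) raises TypeError: 'NoneType' object is not iterable,
    # which is the failure in the traceback above. Guarding with a
    # default label avoids it.
    labels = list(index_label) if index_label is not None else ['']
    return labels + list(cols)

print(header_row(['t']))           # default index_label no longer crashes
print(header_row(['t'], ['key']))  # explicit label, as in the workaround
```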
Code like this was working until quite recently. What I'm really trying to do is read a bunch of CSV files, each of which contains two columns of data: the first is the index, the second holds values. The index is the same across files, so I'm trying to combine these files into one DataFrame.
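The combining step described above can be sketched as follows (hedged: the file names and contents are made up, and `pd.concat` is one reasonable way to do the alignment, not necessarily what the original code used):

```python
import io

import pandas as pd

# Stand-ins for the reporter's CSV files: two columns each, the first
# column is a shared index, the second holds the values.
files = {
    'alpha': io.StringIO('a,1.0\nb,2.0'),
    'beta':  io.StringIO('a,3.0\nb,4.0'),
}

# Read each file's value column as a Series keyed by the shared index...
series = {
    name: pd.read_csv(f, header=None, index_col=0).iloc[:, 0]
    for name, f in files.items()
}

# ...then align the Series side by side into one DataFrame.
combined = pd.concat(series, axis=1)
print(combined)
```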
Thanks,
dieter