
Is It Possible To Use Dictvectorizer On Chunked Data?

I am trying to import chunked data using the pandas CSV reader in Python, to overcome a memory error, and use DictVectorizer to transform string columns to float dtypes. But I could see two different

Solution 1:

In pandas 0.19, you can declare columns as categorical in read_csv. See the documentation.

Following the example in the docs, you can type a column named col1 in your CSV like this and reduce the memory footprint:

pd.read_csv(StringIO(data), dtype={'col1': 'category'})
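To combine this with the chunked reading from the question, here is a minimal runnable sketch. The CSV data and column names are hypothetical stand-ins for a file too large for memory. One caveat worth showing: each chunk infers its own category set, so naively concatenating chunks can fall back to object dtype; pandas' `union_categoricals` rebuilds one shared category set across chunks.

```python
from io import StringIO
import pandas as pd
from pandas.api.types import union_categoricals

# Hypothetical CSV data standing in for a file too large to load at once.
data = "col1,col2\na,1\nb,2\na,3\nc,4\n"

# Read in chunks, declaring col1 categorical to shrink each chunk's footprint.
chunks = list(pd.read_csv(StringIO(data),
                          dtype={'col1': 'category'},
                          chunksize=2))

# Each chunk learns only the categories it saw ({'a','b'} vs {'a','c'}),
# so union_categoricals is used to merge them into one consistent encoding.
col1 = union_categoricals([c['col1'] for c in chunks])

print(list(col1.categories))  # ['a', 'b', 'c']
print(list(col1.codes))       # [0, 1, 0, 2]
```

The integer `codes` then give a numeric representation of the string column, similar in spirit to what a fitted DictVectorizer would produce, but consistent across chunks.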
