I have a table with a lot of columns (30 of them) and I need to pull all of the columns, convert the selected rows to JSON, and send the result to my server (with the column names included in the JSON so the server can easily tell which value is which). I first tried this:
results = session.query(RecoveryLogEntries).limit(record_count)
I did this thinking it would give me everything, and that I could pull the column names and build my JSON from there, but that didn't work: results.column_descriptions didn't contain the per-column names I expected.
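To make what I'm seeing concrete, here's a minimal reproduction (the table is trimmed down to two stand-in columns and the names are made up; I'm on a recent SQLAlchemy, 1.4+):

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class RecoveryLogEntries(Base):
    __tablename__ = "recovery_log_entries"
    id = Column(Integer, primary_key=True)
    recovery_instance_uuid = Column(String)  # stand-in for the other 29 columns

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

# Querying the mapped class gives ONE description for the whole entity,
# named after the class -- not one entry per column.
q_entity = session.query(RecoveryLogEntries)
print([d["name"] for d in q_entity.column_descriptions])

# Naming each column gives one description per column.
q_cols = session.query(RecoveryLogEntries.id,
                       RecoveryLogEntries.recovery_instance_uuid)
print([d["name"] for d in q_cols.column_descriptions])
```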
Then I tried this (provided every column):
results = session.query(RecoveryLogEntries.id, RecoveryLogEntries.recovery_instance_uuid, etc).limit(record_count)
After that I did the following:
columns = [desc['name'] for desc in results.column_descriptions]
recovery_data_list = []
for row in results:
    row_dict = {}
    for column_name in columns:
        print('column name:', column_name)
        row_dict[column_name] = getattr(row, column_name)
    recovery_data_list.append(row_dict)
If I name each column, the above code works; if I don't, it doesn't (it only took me about 5 hours to figure that out!).
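For what it's worth, here's the workaround sketch I'm leaning toward: deriving the column names from the table metadata via __table__.columns.keys() instead of typing all 30. The table below is a trimmed stand-in, and I'm not sure this is the idiomatic route, hence the question:

```python
import json
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class RecoveryLogEntries(Base):
    __tablename__ = "recovery_log_entries"
    id = Column(Integer, primary_key=True)
    recovery_instance_uuid = Column(String)  # stand-in for the other 29 columns

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
session.add(RecoveryLogEntries(id=1, recovery_instance_uuid="abc-123"))
session.commit()

record_count = 10

# Get the column names from the table metadata instead of typing each one.
columns = RecoveryLogEntries.__table__.columns.keys()

results = session.query(RecoveryLogEntries).limit(record_count)
recovery_data_list = [
    {name: getattr(row, name) for name in columns} for row in results
]
payload = json.dumps(recovery_data_list)  # what I'd send to the server
```

This only works cleanly when the mapped attribute names match the column names, which they do for my table.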
So, my question: is it generally better practice to name every column you want to pull, even if it's a long list? And why does using just RecoveryLogEntries instead of naming each column yield a different result? It seems odd, because in the SQL world the output of "SELECT *" and "SELECT id, ..." has the same format regardless of whether I explicitly name each column or use * to select them all. Naming all 30 just seems like a whole bunch of typing that could be error-prone. I'll do it if I need to, but what I'm really asking is: what is the most appropriate/accepted/standard way to do this?
btw, SQLAlchemy rocks! It's a headful, but I'm digging it!