from sqlalchemy import Boolean, Column, JSON, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()


class Model(Base):
    """
    Acquire from the official tensorflow_datasets model zoo, or the ophthalmology focused ml-prepare library

    :cvar dataset_name: name of dataset. Defaults to mnist
    :cvar tfds_dir: directory to look for models in. Defaults to ~/tensorflow_datasets
    :cvar K: backend engine, e.g., `np` or `tf`. Defaults to np
    :cvar as_numpy: Convert to numpy ndarrays
    :cvar data_loader_kwargs: pass this as arguments to data_loader function
    """
    __tablename__ = 'model'

    dataset_name = Column(String, primary_key=True, default='mnist',
                          comment='name of dataset', doc='name of dataset')
    tfds_dir = Column(String, default='~/tensorflow_datasets',
                      comment='directory to look for models in',
                      doc='directory to look for models in')
    K = Column(String, default='np',
               comment='backend engine, e.g., `np` or `tf`',
               doc='backend engine, e.g., `np` or `tf`')
    as_numpy = Column(Boolean,
                      comment='Convert to numpy ndarrays',
                      doc='Convert to numpy ndarrays')
    data_loader_kwargs = Column(JSON,
                                comment='pass this as arguments to data_loader function',
                                doc='pass this as arguments to data_loader function')
    # _return_type = 'Train and tests dataset splits. Defaults to (np.empty(0), np.empty(0))'

    def __repr__(self):
        """
        :returns: String representation of constructed object
        :rtype: ```str```
        """
        return ('<Model(dataset_name={self.dataset_name!r},'
                ' tfds_dir={self.tfds_dir!r},'
                ' K={self.K!r},'
                ' as_numpy={self.as_numpy!r},'
                ' data_loader_kwargs={self.data_loader_kwargs!r}'
                ')>').format(self=self)
--
SQLAlchemy -
The Python SQL Toolkit and Object Relational Mapper
http://www.sqlalchemy.org/
To post example code, please provide an MCVE: Minimal, Complete, and Verifiable Example. See http://stackoverflow.com/help/mcve for a full description.
---
You received this message because you are subscribed to a topic in the Google Groups "sqlalchemy" group.
To unsubscribe from this topic, visit https://groups.google.com/d/topic/sqlalchemy/xZAh5zPswM0/unsubscribe.
To unsubscribe from this group and all its topics, send an email to sqlalchemy+...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/sqlalchemy/6576d789-d088-4e68-a7f7-a17b5c96a810o%40googlegroups.com.
Dear Mike,

My tool works at the AST level, and converts between:
- different docstring formats;
- having types in the docstring or explicitly annotated;
- argparse parser-augmenting functions, classes [plain old Python classes], methods/functions

The next step is to add support for SQLAlchemy models, routes, and tests. As you saw from my example code above, the duplication in SQLAlchemy is intense. Columns can be documented in the docstring, and/or on the column itself with `comment` and/or `doc`. So if I'm going to generate these SQLAlchemy models, and generate classes etc. from these SQLAlchemy models, then I'll need a clean, consistent way of documenting each model. What is that way?
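To make that duplication concrete, here is a minimal sketch (assuming SQLAlchemy 1.4+ for `sqlalchemy.orm.declarative_base`; the class and table names are illustrative only) showing that the same text ends up reachable in two places at runtime:

```python
from sqlalchemy import Column, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()


class Doc(Base):
    __tablename__ = 'doc_demo'
    name = Column(String, primary_key=True,
                  comment='name of dataset',  # emitted as SQL COMMENT on the column
                  doc='name of dataset')      # Python/Sphinx-side only, never hits the DB

# The same documentation string is stored twice on the same Column object:
col = Doc.__table__.columns['name']
print(col.comment)  # 'name of dataset'
print(col.doc)      # 'name of dataset'
```

(Plus a third potential copy in the class docstring, which SQLAlchemy never sees at all.)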
Again, my goal isn't related to Sphinx (although generating nice documentation is, of course, a nice-to-have). One advantage of having the generated SQL code be commented is that I could write parsers that go from SQL files to SQLAlchemy models, complete with docs.

The issue with having loose strings in the middle of a class, as you've done, is that they carry no built-in semantics, and it'll break all existing linters.

Sure, I could extend the linters and traverse the body of the class, inferring the semantics. But that would be incredibly non-standard. I'm trying to generate code that could be considered the standard.
So comment, doc, or an ivar/cvar [Sphinx treats these as the same: https://www.sphinx-doc.org/en/master/usage/restructuredtext/domains.html#info-field-lists] is what I'll generate to/from. I can generate them all, but that would be hard for a human to maintain. The idea with the generated code is that it needs to be human maintainable, as well as machine maintainable.
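A sketch of one direction of that generation (the helper name `cvar_docstring` is mine, not an existing API; assumes SQLAlchemy 1.4+): lift each column's `doc`, falling back to `comment`, into Sphinx `:cvar:` info-field lines.

```python
from sqlalchemy import Column, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()


class Demo(Base):
    __tablename__ = 'demo'
    dataset_name = Column(String, primary_key=True, doc='name of dataset')
    tfds_dir = Column(String, comment='directory to look for models in')


def cvar_docstring(model):
    """Build ``:cvar:`` lines from each column's ``doc``, falling back to ``comment``."""
    return '\n'.join(
        ':cvar {0.name}: {1}'.format(col, col.doc or col.comment or '')
        for col in model.__table__.columns
    )


print(cvar_docstring(Demo))
# :cvar dataset_name: name of dataset
# :cvar tfds_dir: directory to look for models in
```

The reverse direction (parsing `:cvar:` lines back out into `doc`/`comment` kwargs) would give the round-trip, with a single canonical place for a human to edit.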
Good clarification. Didn't know about those PEPs.

So you're thinking that for runtime discoverability, use `doc` first, and fall back to `comment`?

My idea with this project is to translate between:
↔ SQL
↔ Python (public served API, models, tests)
↔ Rust (public served API, models, tests)
↔ TypeScript (web)
↔ Swift (iOS)
↔ Java/Kotlin (Android)

So naturally every layer needs to have sufficient information to recreate the semantics for every other layer. SQL has the strongest types; for the rest, a small new syntax will need to be created, e.g., to specify that this property is a PK of VARCHAR(20). With the exception of this, however, everything else should be standard, and it should be easy for any developer to jump in and develop following best practices in their chosen language(s) and framework(s).
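That doc-first, comment-fallback lookup could be as small as the following sketch (`column_doc` is a hypothetical helper, not part of SQLAlchemy):

```python
from sqlalchemy import Column, String


def column_doc(col):
    """Prefer the Python-side ``doc``; fall back to the SQL-side ``comment``."""
    return col.doc if col.doc is not None else col.comment


# Detached columns are enough to exercise the lookup order:
both = Column('a', String, doc='from doc', comment='from comment')
comment_only = Column('b', String, comment='from comment')

print(column_doc(both))          # 'from doc'
print(column_doc(comment_only))  # 'from comment'
```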