Standard way of sharing architectures?

Samuel Marks

Dec 7, 2021, 2:46:23 PM
to SIG Addons
This is from an RFC I posted on https://github.com/keras-team/keras/issues/15762 - up-vote and comment in the thread if you want =)

---

There are a huge number of new statistical, machine-learning and artificial intelligence solutions being released every month.

Most are open-source and written in a popular Python framework like TensorFlow, JAX, or PyTorch.

To 'guarantee' you are using the best [for given metric(s)] solution for your dataset, there needs to be a way of automatically adding these new statistical, machine-learning and artificial intelligence solutions to your automated pipeline.

(additionally: useful for testing your new optimiser, loss function, &etc. across a zoo of datasets)

Ditto for transfer learning models. A related problem is automatically putting ensemble networks together. Something like:

import keras

import some_broke_arch  # pip install some_broke_arch
import other_neat_arch  # pip install other_neat_arch
import horrible_v_arch  # builtin to keras

model   = some_broke_arch.get_arch(   **standard_arch_params  )
metrics = other_neat_arch.get_metrics(**standard_metric_params)
loss    = horrible_v_arch.get_loss(   **standard_loss_params  )

model.compile(loss=loss, optimizer=keras.optimizers.RMSprop(), metrics=metrics)
model.summary()
# &etc.
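
One possible mechanism for the "automatically adding these new solutions to your automated pipeline" part (a minimal sketch, not part of the RFC) is entry-point discovery: packages would declare a hypothetical entry-point group such as keras_architectures in their packaging metadata, and the pipeline would pick up whatever is installed. Assuming Python 3.10+ for the importlib.metadata.entry_points(group=...) signature:

# A minimal sketch; "keras_architectures" is a hypothetical entry-point
# group name that architecture packages would declare in their metadata.
from importlib.metadata import entry_points

def discover_architectures():
    """Return {name: get_arch callable} for every installed package that
    advertises the hypothetical "keras_architectures" entry-point group."""
    return {ep.name: ep.load() for ep in entry_points(group="keras_architectures")}

# A benchmarking pipeline could then iterate over everything discovered:
for name, get_arch in discover_architectures().items():
    print("found architecture provider:", name)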

In summary, I am petitioning for standard ways of:

  1. exposing algorithms for consumption;
  2. combining algorithms;
  3. comparing algorithms.
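
As a rough illustration of (1) only (the names and signatures below are my assumptions, not an agreed standard), the hypothetical packages in the snippet above could each conform to a structural interface along these lines:

# Illustration only: structural interfaces matching the hypothetical
# get_arch / get_metrics / get_loss hooks used in the snippet above;
# each package implements whichever role(s) it provides.
from typing import Any, Protocol, runtime_checkable

@runtime_checkable
class ArchProvider(Protocol):
    def get_arch(self, **standard_arch_params: Any) -> Any: ...       # an uncompiled model

@runtime_checkable
class MetricsProvider(Protocol):
    def get_metrics(self, **standard_metric_params: Any) -> Any: ...  # a list of metrics

@runtime_checkable
class LossProvider(Protocol):
    def get_loss(self, **standard_loss_params: Any) -> Any: ...       # a loss callable

Once every package exposes the same hooks, (2) and (3) become mostly mechanical.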

To that end, I would recommend encouraging the PyPI folk to add a few new classifiers, and that a bunch of us trawl through GitHub every month sending PRs to random repositories (those associated with academic papers), wiring them up with CI/CD so that they become installable with pip install and searchable by classifier on PyPI.
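
To give a sense of what such a PR might add (a sketch under my own assumptions: the package name is the hypothetical one from the snippet above, and the "Framework :: Keras :: ..." classifier strings are examples of the kind of new classifier being requested, not classifiers PyPI currently accepts):

# Sketch of a setup.py for a hypothetical architecture package.
from setuptools import find_packages, setup

setup(
    name="some_broke_arch",
    version="0.1.0",
    packages=find_packages(),
    install_requires=["keras"],
    classifiers=[
        "Programming Language :: Python :: 3",
        "Framework :: Keras :: Architecture",  # proposed classifier (hypothetical)
    ],
)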

Related:

  • my open-source multi-ML meta-framework;
    • uses the builtin ast and inspect modules to traverse the module, class, and function hierarchy of 10 popular open-source ML/AI frameworks (a minimal illustration of this kind of traversal follows after this list);
    • will enable experimentation with the entire 'search-space' of all these ML frameworks (every transfer learning model, optimiser, loss function, &etc.);
    • with a standard way of sharing architectures, it will be able to expand that 'search-space' with community-contributed solutions.
  • this issue from Jul 20, 2019:
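
Regarding the ast/inspect traversal mentioned in the meta-framework bullet above, a minimal illustration (not the meta-framework itself; keras.optimizers is just an example target, and in practice ast is used as well):

# Enumerate the public classes a framework module exposes, using only the
# builtin inspect module; keras.optimizers is an arbitrary example target.
import inspect
import keras

def public_classes(module):
    """Return {name: class} for the public classes exposed by `module`."""
    return {
        name: obj
        for name, obj in inspect.getmembers(module, inspect.isclass)
        if not name.startswith("_")
    }

print(sorted(public_classes(keras.optimizers)))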
---