ModelInferenceAPI
- class lightning_app.components.serve.serve.ModelInferenceAPI(input=None, output=None, host='127.0.0.1', port=7777, workers=0)
Bases: lightning_app.core.work.LightningWork, abc.ABC
The ModelInferenceAPI class makes it easy to serve your model.
- Parameters
  - input (Optional[str]) – Optional input format. If provided, a matching built-in deserializer is used.
  - output (Optional[str]) – Optional output format. If provided, a matching built-in serializer is used.
  - workers (int) – Number of workers for uvicorn. Warning: this won't work if your subclass takes additional arguments.
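The role of the input and output arguments can be sketched in plain Python. The names below (DESERIALIZERS, SERIALIZERS, handle_request) are illustrative stand-ins, not the library's actual internals:

```python
import base64
import json

# Hypothetical stand-ins for the built-in (de)serializers that the
# `input` and `output` arguments would select; the real implementations
# live inside lightning_app.components.serve and may differ.
DESERIALIZERS = {
    "image": lambda payload: base64.b64decode(payload),  # raw image bytes
    "text": lambda payload: str(payload),
}
SERIALIZERS = {
    "text": lambda value: json.dumps({"prediction": str(value)}),
}


def handle_request(payload, input=None, output=None, predict=lambda x: x):
    """Mimic the serving flow: deserialize -> predict -> serialize.

    `input`/`output` intentionally mirror the constructor's parameter
    names; when they are None the payload passes through unchanged.
    """
    data = DESERIALIZERS[input](payload) if input else payload
    result = predict(data)
    return SERIALIZERS[output](result) if output else result


# An identity "model" served with text in and text out.
handle_request("hello", input="text", output="text")
```

With both formats set, the request body is deserialized before reaching the model and the prediction is serialized to JSON on the way out; with neither set, the subclass is responsible for handling raw payloads itself.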
- configure_layout()
Configure the UI of this LightningWork.

You can either:

- Return a single Frontend object to serve a user interface for this Work.
- Return a string containing a URL to act as the user interface for this Work.
- Return None to indicate that this Work doesn't currently have a user interface.
Example: Serve a static directory (with at least a file index.html inside).

    from lightning_app.frontend import StaticWebFrontend

    class Work(LightningWork):
        def configure_layout(self):
            return StaticWebFrontend("path/to/folder/to/serve")
Example: Arrange the UI of my children in tabs (default UI by Lightning).

    class Work(LightningWork):
        def configure_layout(self):
            return [
                dict(name="First Tab", content=self.child0),
                dict(name="Second Tab", content=self.child1),
                dict(name="Lightning", content="https://lightning.ai"),
            ]
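The remaining two return options (a URL string, or None for no UI) can be sketched as follows. A stand-in base class is used here so the snippet runs without Lightning installed; in real code you would subclass LightningWork:

```python
# Stand-in for LightningWork, so this sketch is self-contained.
class LightningWorkStub:
    url = "http://127.0.0.1:7777"


class UrlWork(LightningWorkStub):
    def configure_layout(self):
        # Option 2: a plain URL string acts as the user interface.
        return "https://lightning.ai"


class HeadlessWork(LightningWorkStub):
    def configure_layout(self):
        # Option 3: None indicates this Work has no user interface yet.
        return None
```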
If you don't implement configure_layout, Lightning will use self.url.

Note

This hook gets called at the time of app creation and then again as part of the loop. If desired, a returned URL can depend on the state. This is not the case if the Work returns a Frontend. These need to be provided at the time of app creation in order for the runtime to start the server.

- Return type