public class IgniteDistributedModelBuilder extends Object implements AsyncModelBuilder

Builder of distributed inference models. It starts Apache Ignite services that perform the inference and returns a facade that can be used like a single inference Model.
The common workflow is based on request/response queues and multiple workers represented by Apache Ignite services. When the build(ModelReader, ModelParser) method is called, Apache Ignite starts the specified number of service instances and the request/response queues. Each service instance reads the request queue, processes inbound requests and writes responses to the response queue. The facade returned by the build(ModelReader, ModelParser) method operates with the request/response queues. When the Model.predict(Object) method is called, the argument is sent as a request to the request queue. When the response appears in the response queue, the Future corresponding to the previously sent request is completed and the processing finishes.
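The queue-based request/response mechanism described above can be sketched in plain Java concurrency primitives. This is an illustrative stand-in, not Ignite's implementation: a `BlockingQueue` plays the role of the distributed request queue, worker threads play the role of the service instances, and each request carries the `CompletableFuture` that is completed when the worker "responds" (the doubling function is a hypothetical model).

```java
import java.util.concurrent.*;

public class QueueInferenceSketch {
    // A request pairs an input with the future completed once a response arrives.
    record Request(double input, CompletableFuture<Double> future) {}

    public static void main(String[] args) throws Exception {
        BlockingQueue<Request> requestQueue = new LinkedBlockingQueue<>();

        // Hypothetical "model": doubling the input stands in for real inference.
        int workers = 2;
        ExecutorService services = Executors.newFixedThreadPool(workers);
        for (int i = 0; i < workers; i++) {
            services.submit(() -> {
                try {
                    while (true) {
                        Request req = requestQueue.take();        // read the request queue
                        req.future().complete(req.input() * 2.0); // "respond": complete the future
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();           // stop on shutdownNow()
                }
            });
        }

        // "predict": enqueue a request and keep a Future for the response.
        CompletableFuture<Double> prediction = new CompletableFuture<>();
        requestQueue.put(new Request(21.0, prediction));
        System.out.println(prediction.get()); // prints 42.0

        services.shutdownNow(); // "close": stop the workers
    }
}
```

In the real builder the queues are distributed Ignite structures and the workers are Ignite services, but the completion pattern is the same: the caller blocks only if and when it asks the Future for its value.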
Be aware that the Model.close() method must be called to clear allocated resources, stop services and remove queues.

Constructor and Description |
---|
IgniteDistributedModelBuilder(Ignite ignite, int instances, int maxPerNode)
Constructs a new instance of Ignite distributed inference model builder. |
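The `instances` and `maxPerNode` arguments together constrain service placement. Assuming `maxPerNode` caps how many of the `instances` service instances a single node may host (an inference from the parameter names, not stated in this Javadoc), the minimum cluster size needed to deploy all instances follows from integer-ceiling division. The helper below is hypothetical, not part of the API:

```java
public class ServicePlacement {
    // Minimum node count needed to host `instances` services when each node
    // hosts at most `maxPerNode` of them (hypothetical helper).
    static int minNodes(int instances, int maxPerNode) {
        return (instances + maxPerNode - 1) / maxPerNode; // integer ceiling
    }

    public static void main(String[] args) {
        System.out.println(minNodes(8, 3)); // prints 3: nodes host 3 + 3 + 2 instances
    }
}
```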
Modifier and Type | Method and Description |
---|---|
<I extends Serializable,O extends Serializable> Model<I,Future<O>> | build(ModelReader reader, ModelParser<I,O,?> parser)
Starts the number of service instances specified in the constructor, along with the request/response queues. |
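The shape of the `build` contract, a `Model` whose `predict` returns a `Future`, can be sketched with plain Java concurrency. The interface below mirrors the names in this Javadoc, but the executor-backed implementation is an illustrative stand-in for the queue-backed facade, not Ignite's code:

```java
import java.util.concurrent.*;
import java.util.function.Function;

public class AsyncFacadeSketch {
    // Minimal stand-in for the Model interface: predict plus close.
    interface Model<I, O> extends AutoCloseable {
        O predict(I input);
        @Override void close();
    }

    // Builds an async facade: predict submits work and returns a Future,
    // hiding the executor the way the Ignite facade hides its queues.
    static <I, O> Model<I, Future<O>> build(Function<I, O> inference) {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        return new Model<>() {
            @Override public Future<O> predict(I input) {
                return pool.submit(() -> inference.apply(input));
            }
            // Release resources, as the Javadoc requires of close().
            @Override public void close() { pool.shutdown(); }
        };
    }

    public static void main(String[] args) throws Exception {
        try (Model<Integer, Future<Integer>> model = build(x -> x * x)) {
            System.out.println(model.predict(7).get()); // prints 49
        }
    }
}
```

Using try-with-resources guarantees the `close()` call that the documentation warns must not be skipped.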
public IgniteDistributedModelBuilder(Ignite ignite, int instances, int maxPerNode)

Parameters:
ignite - Ignite instance.
instances - Number of service instances maintained to perform distributed inference.
maxPerNode - Maximum number of instances per node.

public <I extends Serializable,O extends Serializable> Model<I,Future<O>> build(ModelReader reader, ModelParser<I,O,?> parser)
Starts the number of service instances specified in the constructor, along with the request/response queues. The returned Model operates with the request/response queues but hides these details behind the Model.predict(Object) method.

Be aware that the Model.close() method must be called to clear allocated resources, stop services and remove queues.

Specified by: build in interface AsyncModelBuilder

Type Parameters:
I - Type of model input.
O - Type of model output.

Parameters:
reader - Inference model reader.
parser - Inference model parser.

Returns: the distributed inference Model.
GridGain In-Memory Computing Platform, ver. 8.9.14. Release Date: November 5, 2024.