UNDER CONSTRUCTION

Welcome to the Getting Started guide.

This is what you will achieve by following the next steps:

  • Getting Saga
  • Setting up the environment
  • Deploying Saga
  • Running Saga
  • Checking that it's working!

Prerequisites

Software

  1. Elasticsearch 7+ (versions 7.14 - 7.17 for 1.3.3; also 8.8.1 for 1.3.3 and 8.12.2 for 1.3.4) or OpenSearch v. 1.1 or above
  2. Java 11 or above (Java 17 for versions 1.3.3/1.3.4)

Note

This guide assumes you already have a stable Saga release such as 1.0 or above.

Hardware

  1. RAM: 10 GB free
  2. CPU: 2 cores


Info

The hardware prerequisites above are just a starting point.  Actual needs depend a lot on how you configure Saga and on your processing time expectations.

  • Saga comes configured by default to use from 6 to 10 GB of JVM heap. This configuration can be changed in the bin\startup.bat or bin\startup.sh files.  Memory consumption depends mostly on how big the dictionaries loaded into Saga are.  If you are not using dictionaries at all, or they are very small, you can decrease the memory configuration accordingly.  If your dictionaries are big, say more than 2 million entries in total, you may need to increase the 10 GB limit.
  • Saga is very CPU intensive: the more cores the server has, the faster data will get processed.  The number of cores therefore depends on your specific processing time requirements.
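As a sketch of what the heap change looks like (the exact variable name inside the bundled startup scripts is an assumption; check your own bin/startup.sh), the bounds are ordinary JVM flags:

```shell
# Hypothetical excerpt from bin/startup.sh: -Xms sets the initial heap, -Xmx the maximum.
# Lower both if your dictionaries are small; raise -Xmx above 10g for very large ones.
JAVA_OPTS="-Xms6g -Xmx10g"
```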



Step 1: Getting Saga


You'll be able to get the binaries from here (SCA Artifactory repository).

From this version on, we provide Docker images:

Saga image (lightweight):

docker.repository.sca.accenture.com/docker/saga-server:1.3.4-javacio17-base

docker.repository.sca.accenture.com/docker/saga-server:1.3.4-alpine3.19

Saga image containing the TensorFlow model used in the FAQ recognizer:

docker.repository.sca.accenture.com/docker/saga-server:1.3.4-tensor-javacio17-base

docker.repository.sca.accenture.com/docker/saga-server:1.3.4-tensor-alpine3.19

Python Bridge:

docker.repository.sca.accenture.com/docker/saga-python-bridge:1.3.4-ubuntu22.04cio-base-basic

docker.repository.sca.accenture.com/docker/saga-python-bridge:1.3.4-ubuntu22.04cio-base-all

docker.repository.sca.accenture.com/docker/saga-python-bridge:1.3.4-debian12-basic

docker.repository.sca.accenture.com/docker/saga-python-bridge:1.3.4-debian12-all
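Assuming a standard Docker setup, pulling and running the lightweight image looks roughly like this (the port mapping is an assumption based on the default apiPort 8080 described in the configuration step; adjust to your config.json):

```shell
# Pull the lightweight Saga server image (repository permissions required)
docker pull docker.repository.sca.accenture.com/docker/saga-server:1.3.4-alpine3.19

# Run it, mapping the assumed default API port
docker run -d --name saga-server -p 8080:8080 \
  docker.repository.sca.accenture.com/docker/saga-server:1.3.4-alpine3.19
```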

Note

You will need permissions to download binaries or docker images.

Please send your access request to [email protected]



Note

The Saga team has a team on Microsoft Teams here.

Creators and users of Saga are subscribed to this Saga team, so you can always post a message to get some help if you have questions or comments.


Step 2: Set up the environment


  1. Check that Java 11 (or 17 for 1.3.3/1.3.4) is installed on your machine by running in a terminal (or system console):

    $> java -version

  2. Unpackage Saga in your preferred location.
     This is our recommended setup but you can pretty much handle the paths as you wish.
     This guide will refer to Saga's working directory as {SAGA_HOME}.
  3. Saga uses Elasticsearch (7+, and also 8 for 1.3.3/1.3.4) or OpenSearch (v.1.1/v.2.X).
    1. Deploy Elasticsearch (ES) or OpenSearch.
    2. Run ES by executing the binary in {ELASTIC_HOME}/bin.
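On Linux/Mac, the Elasticsearch step can be sketched as follows ({ELASTIC_HOME} is the directory you unpacked ES into; the daemon flag and default port are standard Elasticsearch behavior):

```shell
# Start Elasticsearch as a daemon, then confirm it answers on the default port (9200)
{ELASTIC_HOME}/bin/elasticsearch -d
curl -s http://localhost:9200/
```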



Tip

If you have never run Saga against your Elasticsearch, your Elasticsearch will be empty.

That's OK because Saga will generate all the necessary indexes with the minimum default data (base pipeline, executors, ...), although you will need to add new tags and resources.


Step 3: Deploy Saga


Once you have Saga in {SAGA_HOME}, validate the following:

There is a {SAGA_HOME}/lib folder containing the following JARs:

  1. saga-classification-trainer-stage-1.3.4
  2. saga-faq-stage-1.3.4
  3. saga-intent-stage-1.3.4
  4. saga-lang-detector-stage-1.3.4
  5. saga-name-trainer-stage-1.3.4
  6. saga-parts-of-speech-stage-1.3.4
  7. saga-sentence-breaker-stage-1.3.4
  8. saga-spellchecking-stage-1.3.4


Check the configuration based on which NoSQL DB provider is used:

    ES : How To Connect To Elasticsearch

    Opensearch: How To Connect To OpenSearch


Check the basic configuration in {SAGA_HOME}/config/config.json:

  1. "apiPort": 8080 → The port used by the server.
  2. "host": "0.0.0.0" → This IP/mask is used to restrict inbound connections; open to all connections by default.
  3. "logger" → Each logger level config per handler.
  4. "provider" → Data providers, mainly used to specify the location of resource files like dictionaries and the ES configuration.
    • New filesystem providers can be added to group different resource files.
    • The ES configuration includes the "port" to connect to; this is 9200 by default, and you may change it to fit your environment.
  5. "security" → Security access to the Saga Server and Management UI (recommended only when SSL is also enabled).
    1. "enable": true → Indicates the use of security.
    2. "encryptionKeyFile" → File containing the encryption key.  A file is provided by default but it is recommended to change it.
    3. "users.username" → Specifies the username to use.
    4. "users.password" → Specifies the password for the username. Can be either plain or encrypted.
  6. "ssl" → Enables SSL for the communication with the Saga Server.
    1. "enable": true → Indicates the use of SSL.
    2. "keyStore" → Path to the keyStore.
    3. "keyStorePassword" → Password of the keyStore. Can be either plain or encrypted.
  7. "libraryJars": ["./lib"] → Folder where library jars are located.
  8. "solutions" → Bundled solution schema; its values may be changed to have multiple servers with different "solutions" or to switch from one to another.  A solution works as a domain.  By default the "saga" solution creates ES indexes using the pattern "saga-<index>" and only loads indexes with the same pattern, so you could have multiple solutions on one ES server.  To switch between solutions you'd need to shut down the server, change the "indexName" value, and restart the server.
  9. If you are running Elasticsearch on localhost then you don't need to change anything. But if your Elasticsearch runs somewhere else, you'll need to adjust the "nodeUrls" property in both the "elasticSearch" solution and the "Elastic" provider. In case you have a cluster, you can specify the nodes separated by commas, for example: "nodeUrls": ["elastic1:9200", "elastic2:9200", "elastic3:9200"]

Info

For more information about Saga Configuration, check this.
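Putting those fields together, a minimal config.json might look roughly like this. This is a sketch assembled only from the fields described above, not a complete reference: the file paths, the shape of the "users" entry, and the omitted "logger"/"provider"/"solutions" sections are assumptions, so check the configuration documentation linked above.

```json
{
  "apiPort": 8080,
  "host": "0.0.0.0",
  "libraryJars": ["./lib"],
  "security": {
    "enable": true,
    "encryptionKeyFile": "config/encryption.key",
    "users": [{ "username": "admin", "password": "changeme" }]
  },
  "ssl": {
    "enable": true,
    "keyStore": "config/keystore.jks",
    "keyStorePassword": "changeme"
  }
}
```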


If you have some valid "models" you'd like to include on the server:

  1. Create a {SAGA-HOME}/nt-models folder for "name trainers" and copy the model there.
  2. Create a {SAGA-HOME}/ct-models folder for "classification trainers" and copy the model there.
  3. Create a {SAGA-HOME}/tf-models folder for "FAQ" (uses TensorFlow) and copy the model there.

To add datasets:

  1. Create a {SAGA-HOME}/datasets folder.
  2. Each dataset must be placed in its own folder. This folder name will be the one displayed for "test runs".
  3. Each data document in the dataset must be compliant with Saga's data file JSON format.
  4. Each folder must contain a ".metadata" file with information about the dataset and how to read it.
  5. You can check out the dataset format here.
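As an illustration of that layout (the folder and file names here are hypothetical; the contents of ".metadata" and of each document are defined by the dataset format linked above):

```
datasets/
└── my-dataset/          ← hypothetical folder name, shown for "test runs"
    ├── .metadata        ← describes the dataset and how to read it
    ├── doc-0001.json    ← documents in Saga's data file JSON format
    └── doc-0002.json
```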

Step 4: Run Saga


To run Saga:

  1. Check that Elasticsearch is running.
  2. Use the bundled startup script in {SAGA_HOME}/bin (either startup.bat for Windows or startup.sh for Linux and Mac).
  3. If you didn't change the default port in the configuration, you should be able to access the Saga UI at http://localhost:8080/. If not, check your configuration for the right port.
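Those checks can be scripted; this sketch assumes the default ports (9200 for Elasticsearch, 8080 for Saga) and a Linux/Mac shell:

```shell
# Verify Elasticsearch is up before starting Saga
curl -s http://localhost:9200/

# Start Saga, then check that the UI port responds
{SAGA_HOME}/bin/startup.sh
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080/
```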


Step 5: Run Python Bridge Server


Saga has a Python recognizer and a Python stage that can be used to process text using machine learning Python models like BERT.

In case you need this, follow the instructions on how to set up and run the Python Bridge here.
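If you are using the Docker images from Step 1, the bridge can be pulled directly (repository permissions required; pick the base image and basic/all variant that suit your environment, per the list in Step 1):

```shell
docker pull docker.repository.sca.accenture.com/docker/saga-python-bridge:1.3.4-debian12-basic
```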


