Welcome to the Getting Started guide. This is what you will achieve by following the next steps:

  • Getting Saga
  • Setting up the environment
  • Deploying Saga
  • Running Saga
  • Checking that it's working!

Prerequisites


  1. ElasticSearch 7+/Opensearch v. 1.1
  2. Java 11 or above

Step 1: Getting Saga


You can get the binaries from here (SCA Artifactory repository). You can also find documentation here (Saga MS Teams site).

From this version on, we provide Docker images:

Saga image (lightweight):

docker.repository.sca.accenture.com/docker/saga-server:1.3.1

Saga image containing the TensorFlow model used in the FAQ recognizer:

docker.repository.sca.accenture.com/docker/saga-server:1.3.1-tensor

Python Bridge:

docker.repository.sca.accenture.com/docker/saga-python-bridge:1.3.1
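
For example, you can pull and start the server image with Docker as follows. This is a sketch only: registry authentication and any volume mounts depend on your environment, and the 8080 port mapping simply mirrors the default apiPort described in Step 3.

    $> docker pull docker.repository.sca.accenture.com/docker/saga-server:1.3.1
    $> docker pull docker.repository.sca.accenture.com/docker/saga-server:1.3.1-tensor
    $> docker run -p 8080:8080 docker.repository.sca.accenture.com/docker/saga-server:1.3.1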


The documentation on the Saga MS Teams site includes:

  1. Saga introductory presentation
  2. Saga user manual
Note

Creators and users of Saga are subscribed to this Saga team, so you can always post a message there to get help if you have questions or comments.


Step 2: Set up the environment


  1. Check that Java 11 is installed on your machine by running the following in a terminal (or system console):

    $> java -version

  2. Unpack Saga in your preferred location.
    This is our recommended setup, but you can arrange the paths as you wish.
    This guide will refer to Saga's working directory as {SAGA_HOME}.
  3. Saga uses ElasticSearch (7+) or Opensearch (v.1.1).
    1. Deploy ElasticSearch (ES) or Opensearch
    2. Run ES by executing the binary in {ELASTIC_HOME}/bin (see the quick check below).
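
Before starting Saga, you can confirm that Elasticsearch/OpenSearch is reachable by querying its HTTP endpoint. This assumes the default port 9200 and no authentication on the cluster; adjust the URL to match your deployment.

    $> curl http://localhost:9200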



Tip

If you have never run Saga against your Elasticsearch instance, it will be empty.

That's OK: Saga will generate all the necessary indexes with the minimum default data (base pipeline, executors, ...), although you will need to add your own tags and resources.


Step 3: Deploy Saga


Once you have Saga in {SAGA_HOME}, validate the following:

There is a {SAGA_HOME}/lib folder containing the following JARs:

  1. saga-classification-trainer-stage-1.3.1.jar
  2. saga-faq-stage-1.3.1.jar
  3. saga-lang-detector-stage-1.3.1.jar
  4. saga-name-trainer-stage-1.3.1.jar
  5. saga-parts-of-speech-stage-1.3.1.jar
  6. saga-sentence-breaker-stage-1.3.1.jar
  7. saga-spellchecking-stage-1.3.1.jar
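
A quick way to verify this is to list the folder contents from a terminal (a sketch; adjust the path to your actual {SAGA_HOME}):

    $> ls {SAGA_HOME}/lib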


Check the configuration based on which NoSQL DB provider is used:

    ES: How To Connect To Elasticsearch

    Opensearch: How To Connect To OpenSearch


Check the basic configuration in {SAGA_HOME}/config/config.json (see the example sketch after this list):

  1. "apiPort": 8080 → The port used by the server.
  2. "host": "0.0.0.0" → This IP/mask is used to restrict inbound connections, open to all connections by default.
  3. "security" → Security access to the Saga Server and Management UI (recommended only when ssl is also enabled)
    1. "enable": true → Indicates the use of security
    2. "encryptionKeyFile": → file containing the encryption key.  A file is provided by default but it is recommended to change it.
    3. "users.username" → Specify the username to use
    4. "users.password" → Specify the password for the username. Can be either plain or encrypted
  4. "ssl" → Enables SSL for the communication with the Saga Server
    1. "enable": true → Indicates the use of SSL
    2. keyStore → Path to the keyStore
    3. keyStorePassword → Password of the keyStore. Can be either plain or encrypted
  5. "libraryJars": ["./lib"] → Folder where library jars are located
  6. If Elasticsearch is running on localhost, you don't need to change anything. If it runs elsewhere, adjust the "nodeUrls" property in both the "elasticSearch" solution and the "Elastic" provider. If you have a cluster, specify the nodes separated by commas, for example: "nodeUrls": ["elastic1:9200", "elastic2:9200", "elastic3:9200"]
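
For reference, here is a minimal config.json sketch covering the options above. The key names come from this guide, but the exact nesting (in particular of "users" and of the Elasticsearch sections) and all values shown are illustrative assumptions, so compare it against the config.json shipped with your distribution:

    {
      "apiPort": 8080,
      "host": "0.0.0.0",
      "security": {
        "enable": true,
        "encryptionKeyFile": "./config/encryption.key",
        "users": [
          { "username": "admin", "password": "changeme" }
        ]
      },
      "ssl": {
        "enable": true,
        "keyStore": "./config/keystore.jks",
        "keyStorePassword": "changeme"
      },
      "libraryJars": ["./lib"],
      "elasticSearch": {
        "nodeUrls": ["elastic1:9200", "elastic2:9200", "elastic3:9200"]
      }
    }

The same "nodeUrls" list also appears under the "Elastic" provider section, which is omitted from this sketch.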


If you have some valid "models" you'd like to include on the server (see the example after this list):

  1. Create a {SAGA_HOME}/nt-models folder for "name trainers" and copy the model there.
  2. Create a {SAGA_HOME}/ct-models folder for "classification trainers" and copy the model there.
  3. Create a {SAGA_HOME}/tf-models folder for "FAQ" (uses TensorFlow) and copy the model there.
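
For example (a sketch only; the source model paths below are hypothetical placeholders):

    $> mkdir {SAGA_HOME}/nt-models && cp -r /path/to/name-trainer-model/* {SAGA_HOME}/nt-models/
    $> mkdir {SAGA_HOME}/ct-models && cp -r /path/to/classification-model/* {SAGA_HOME}/ct-models/
    $> mkdir {SAGA_HOME}/tf-models && cp -r /path/to/faq-tensorflow-model/* {SAGA_HOME}/tf-models/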

To add datasets (see the layout sketch after this list):

  1. Create a {SAGA_HOME}/datasets folder.
  2. Each dataset must be placed in its own folder. This folder name will be the one displayed for "test runs".
  3. Each data document in the dataset must be compliant with Saga's data file JSON format.
  4. Each folder must contain a ".metadata" file with information about the dataset and how to read it.
    You can check out the dataset format here.
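
As an illustration, a dataset folder could look like the layout below. The folder and file names are hypothetical, and the contents of the ".metadata" file follow the dataset format linked above (not shown here):

    {SAGA_HOME}/datasets/
        faq-samples/           <- folder name displayed for "test runs"
            .metadata          <- describes the dataset and how to read it
            documents.json     <- data documents in Saga's data file JSON format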

Step 4: Run Saga


To run Saga:

Check that ElasticSearch is running.

Use the bundled startup script in {SAGA_HOME}/bin (either startup.bat for Windows or startup.sh for Linux and Mac).

If you didn't change the default port on the configuration, you should be able to access Saga UI at http://localhost:8080/.
If not, then check your configuration for the right port.
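
On Linux or Mac, for example (assuming the default port 8080; the curl call is just a quick reachability check, run it from another terminal if the script stays in the foreground):

    $> cd {SAGA_HOME}/bin
    $> ./startup.sh
    $> curl http://localhost:8080/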


Step 5: Run Python Bridge Server


Saga has a Python recognizer and a Python stage that can be used to process text with machine-learning Python models such as BERT.

If you need this, follow the instructions on how to set up and run the Python Bridge here.
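
If you prefer the Docker image listed in Step 1, you can pull it as shown below (a sketch only; the ports and runtime options the bridge needs are covered by the setup instructions linked above):

    $> docker pull docker.repository.sca.accenture.com/docker/saga-python-bridge:1.3.1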