
Version 1.0 (2017-11-13)

This version of the SDK accepts either models or dicts as input parameters and produces dicts as method responses. Models for response classes are still generated and not pruned, so users can create a model from the returned dict.

Conversation

  • message() parameter message_input renamed to input
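
For example, a minimal sketch of the new call (the credentials, version date, and workspace ID are placeholders):

    from watson_developer_cloud import ConversationV1

    conversation = ConversationV1(
        username='YOUR_USERNAME',
        password='YOUR_PASSWORD',
        version='2017-05-26')

    # input (formerly message_input) takes a plain dict, per the dict-or-model convention above
    response = conversation.message(
        workspace_id='YOUR_WORKSPACE_ID',
        input={'text': 'Hello'})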

Discovery

  • create_configuration() parameter config_data (a dict such as {"name": ""}) replaced by name
  • update_configuration() parameter config_data (a dict such as {"name": ""}) replaced by name
  • add_document() parameter file_data is removed; file contents are now passed with the file/filename parameters (see the sketch after this list)
  • update_document() parameter mime_type renamed to file_content_type; file_info and file_data replaced by file; filename is the file name given to the file
  • Some methods have been renamed:
    • get_environments -> list_environments
    • test_document -> test_configuration_in_environment
    • get_document -> get_document_status
    • delete_training_data -> delete_all_training_data
    • add_training_data_query -> add_training_data
    • delete_training_data_query -> delete_training_data
    • get_training_data_query -> get_training_data
    • add_training_data_query_example -> create_training_example
    • delete_training_data_query_example -> delete_training_example
    • get_training_data_query_example -> get_training_example
    • update_training_data_query_example -> update_training_example
  • list_training_data_query_examples() is removed
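
A minimal sketch of the new add_document() file handling (the discovery client, environment ID, collection ID, and file name are placeholders):

    # file contents go through the file/filename parameters instead of file_data
    with open('example.pdf', 'rb') as file_handle:
        discovery.add_document(
            environment_id='YOUR_ENVIRONMENT_ID',
            collection_id='YOUR_COLLECTION_ID',
            file=file_handle,
            filename='example.pdf')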

Language Translator

  • Some methods have been renamed:
    • get_models -> list_models
    • get_identifiable_languages -> list_identifiable_languages
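
The renamed methods take the same arguments as before; a minimal sketch, assuming an existing client named language_translator:

    # formerly get_models()
    models = language_translator.list_models()

    # formerly get_identifiable_languages()
    languages = language_translator.list_identifiable_languages()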

Natural Language Classifier

  • Some methods have been renamed:
    • list -> list_classifiers
    • status -> get_classifier
    • create -> create_classifier
      • create_classifier() parameter metadata has been added (see the sketch after this list)
    • remove -> delete_classifier
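
A minimal sketch of create_classifier() with the new metadata parameter, assuming the client name, classifier name, and training data file are placeholders and that metadata is passed as a JSON string:

    with open('train.csv', 'rb') as training_data:
        classifier = natural_language_classifier.create_classifier(
            metadata='{"name": "MyClassifier", "language": "en"}',
            training_data=training_data)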

Natural Language Understanding

  • analyze() parameter limit_text_characters has been added (see the sketch after the example below)

  • Dropped hand-written Features module in favor of generated Features model. For example:

    natural_language_understanding.analyze(
        text='Messi is the best',
        features=[Features.Entities(), Features.Keywords()])

    is now:

    from watson_developer_cloud.natural_language_understanding_v1 import (
        Features, EntitiesOptions, KeywordsOptions)

    natural_language_understanding.analyze(
        text='Messi is the best',
        features=Features(entities=EntitiesOptions(), keywords=KeywordsOptions()))
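
The new limit_text_characters parameter is passed alongside the other analyze() arguments; the value below is only illustrative:

    natural_language_understanding.analyze(
        text='Messi is the best',
        features=Features(entities=EntitiesOptions(), keywords=KeywordsOptions()),
        limit_text_characters=200)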

Tone Analyzer

  • tone() parameters have been reordered:

    tone(self, tone_input, content_type='application/json', sentences=None, tones=None,
         content_language=None, accept_language=None)
  • tone() parameter text replaced by tone_input

  • tone() parameter content_type default value changed from text/plain to application/json

  • tone() parameters content_language and accept_language have been added

    tone(self, text, tones=None, sentences=None, content_type='text/plain')

    is now:

    tone(self, tone_input, content_type='application/json', sentences=None, tones=None,
         content_language=None, accept_language=None)
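
A minimal sketch of a call with the new parameter names, passing tone_input as a dict with the JSON default content type (the client name and text are placeholders):

    tone = tone_analyzer.tone(
        tone_input={'text': 'I am very happy today!'},
        content_type='application/json')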

Personality Insights

  • profile() parameter text changed to content
  • profile() parameter content_type default value changed from text/plain to application/json
  • profile() parameter accept is removed
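
A minimal sketch of profile() with the renamed content parameter, passing plain text (the client name and text are placeholders; a real call needs substantially more text to produce a meaningful profile):

    profile = personality_insights.profile(
        content='Diplomatic and thoughtful, I weigh every option before acting.',
        content_type='text/plain')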

Visual Recognition

  • classify() parameters images_url, classifier_ids, owners, and threshold are replaced by a single parameters argument (a JSON string).

    visual_recognition.classify(images_file=images_file, threshold=0.1, classifier_ids=['CarsvsTrucks_1479118188', 'default'])

    is now:

    import json

    parameters = json.dumps({'threshold': 0.1, 'classifier_ids': ['CarsvsTrucks_1479118188', 'default']})
    visual_recognition.classify(images_file=images_file, parameters=parameters)