OOZIE ERROR WHILE RUNNING HIVE WORKFLOW IN HUE 3.12


awi...@maprtech.com

Aug 23, 2017, 2:11:41 PM
to Hue-Users
Hello

I have a cluster with Hue 3.12.

When I run a Hive Oozie workflow I get the error below. Could someone please explain what is wrong here?



hue.ini


# Hue configuration file
# ===================================
#
# For complete documentation about the contents of this file, run
#   $ <hue_root>/build/env/bin/hue config_help
#
# All .ini files under the current directory are treated equally.  Their
# contents are merged to form the Hue configuration, which can
# can be viewed on the Hue at
#   http://<hue_host>:<port>/dump_config


###########################################################################
# General configuration for core Desktop features (authentication, etc)
###########################################################################

[desktop]

  # Set this to a random string, the longer the better.
  # This is used for secure hashing in the session store.
  secret_key=asdf0w993q02495uperw9poijsdfqweoriu23o4iuoweifjlkasdjfwiqeru034590345098

  # Execute this script to produce the Django secret key. This will be used when
  # 'secret_key' is not set.
  ## secret_key_script=

  # Webserver listens on this address and port
  http_host=0.0.0.0
  http_port=8888

  # Time zone name
  time_zone=Australia/Canberra

  # Enable or disable Django debug mode.
  django_debug_mode=true

  # Enable or disable database debug mode.
  ## database_logging=false

  # Whether to send debug messages from JavaScript to the server logs.
  ## send_dbug_messages=false

  # Enable or disable backtrace for server error
  http_500_debug_mode=true

  # Enable or disable memory profiling.
  ## memory_profiler=false

  # Server email for internal error messages
  ## django_server_email='h...@localhost.localdomain'

  # Email backend
  ## django_email_backend=django.core.mail.backends.smtp.EmailBackend

  # Webserver runs as this user
  server_user=mapr
  server_group=mapr

  # This should be the Hue admin and proxy user
  default_user=mapr
  # This should be the hadoop cluster admin
  default_hdfs_superuser=mapr

  default_jobtracker_host=maprfs:///
  # If set to false, runcpserver will not actually start the web server.
  # Used if Apache is being used as a WSGI container.
  ## enable_server=yes

  # Number of threads used by the CherryPy web server
  ## cherrypy_server_threads=40

  # This property specifies the maximum size of the receive buffer in bytes in thrift sasl communication (default 2 MB).
  ## sasl_max_buffer=2 * 1024 * 1024

  # Filename of SSL Certificate
  ## ssl_certificate=

  # Filename of SSL RSA Private Key
  ## ssl_private_key=

  # Filename of SSL Certificate Chain
  ## ssl_certificate_chain=

  # SSL certificate password
  ## ssl_password=

  # Execute this script to produce the SSL password. This will be used when 'ssl_password' is not set.
  ## ssl_password_script=

  # X-Content-Type-Options: nosniff This is a HTTP response header feature that helps prevent attacks based on MIME-type confusion.
  ## secure_content_type_nosniff=true

  # X-Xss-Protection: \"1; mode=block\" This is a HTTP response header feature to force XSS protection.
  ## secure_browser_xss_filter=true

  # Content-Security-Policy: HTTP response header that helps prevent cross-site scripting and data injection attacks by restricting the sources from which the browser may load content.
  ## secure_content_security_policy="script-src 'self' 'unsafe-inline' 'unsafe-eval' *.google-analytics.com *.doubleclick.net *.mathjax.org data:;img-src 'self' *.google-analytics.com *.doubleclick.net http://*.tile.osm.org *.tile.osm.org *.gstatic.com data:;style-src 'self' 'unsafe-inline';connect-src 'self';child-src 'self' data: blob:;object-src 'none'"

  # Strict-Transport-Security: HTTP Strict Transport Security (HSTS) is a policy communicated by the server to the user agent via the "Strict-Transport-Security" HTTP response header field. The policy specifies a period of time during which the user agent (browser) should only access the server in a secure fashion (https).
  ## secure_ssl_redirect=False
  ## secure_redirect_host=0.0.0.0
  ## secure_redirect_exempt=[]
  ## secure_hsts_seconds=31536000
  ## secure_hsts_include_subdomains=true

  # List of allowed and disallowed ciphers in cipher list format.
  # See http://www.openssl.org/docs/apps/ciphers.html for more information on
  # cipher list format. This list follows the intermediate compatibility
  # recommendation, which should be compatible with Firefox 1, Chrome 1, IE 7,
  # Opera 5 and Safari 1.
  ## ssl_cipher_list=ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS:!DH:!ADH

  # Path to default Certificate Authority certificates.
  ## ssl_cacerts=/opt/mapr/hue/hue-3.12.0/cert.pem

  # Choose whether Hue should validate certificates received from the server.
  ## validate=true

  # Default LDAP/PAM/.. username and password of the hue user used for authentications with other services.
  # Inactive if password is empty.
  # e.g. LDAP pass-through authentication for HiveServer2 or Impala. Apps can override them individually.
  ## auth_username=hue
  ## auth_password=

  # Default encoding for site data
  ## default_site_encoding=utf-8

  # Help improve Hue with anonymous usage analytics.
  # Use Google Analytics to see how many times an application or specific section of an application is used, nothing more.
  ## collect_usage=true

  # Tile layer server URL for the Leaflet map charts
  # Make sure you add the tile domain to the img-src section of the 'secure_content_security_policy' configuration parameter as well.
  ## leaflet_tile_layer=http://{s}.tile.osm.org/{z}/{x}/{y}.png

  # The copyright message for the specified Leaflet maps Tile Layer
  ## leaflet_tile_layer_attribution='&copy; <a href="http://osm.org/copyright">OpenStreetMap</a> contributors'

  # X-Frame-Options HTTP header value. Use 'DENY' to deny framing completely
  ## http_x_frame_options=SAMEORIGIN

  # Enable X-Forwarded-Host header if the load balancer requires it.
  ## use_x_forwarded_host=false

  # Support for HTTPS termination at the load-balancer level with SECURE_PROXY_SSL_HEADER.
  ## secure_proxy_ssl_header=false

  # Comma-separated list of Django middleware classes to use.
  # See https://docs.djangoproject.com/en/1.4/ref/middleware/ for more details on middlewares in Django.
  ## middleware=desktop.auth.backend.LdapSynchronizationBackend

  # Comma-separated list of regular expressions, which match the redirect URL.
  # For example, to restrict to your local domain and FQDN, the following value can be used:
  # ^\/.*$,^http:\/\/www.mydomain.com\/.*$
  ## redirect_whitelist=^(\/[a-zA-Z0-9]+.*|\/)$

  # Comma separated list of apps to not load at server startup.
  # e.g.: pig,zookeeper
  app_blacklist=search,zookeeper

  # Choose whether to show the new SQL editor.
  ## use_new_editor=true

  # Choose whether to show the improved assist panel and the right context panel
  ## use_new_side_panels=false

  # Editor autocomplete timeout (ms) when fetching columns, fields, tables etc.
  # To disable this type of autocompletion set the value to 0
  ## editor_autocomplete_timeout=5000

  # Enable saved default configurations for Hive, Impala, Spark, and Oozie.
  ## use_default_configuration=false

  # The directory in which to store the auditing logs. Auditing is disabled if the value is empty.
  # e.g. /var/log/hue/audit.log
  ## audit_event_log_dir=

  # Size in KB/MB/GB for audit log to rollover.
  ## audit_log_max_file_size=100MB

  # A json file containing a list of log redaction rules for cleaning sensitive data
  # from log files. It is defined as:
  #
  # {
  #   "version": 1,
  #   "rules": [
  #     {
  #       "description": "This is the first rule",
  #       "trigger": "triggerstring 1",
  #       "search": "regex 1",
  #       "replace": "replace 1"
  #     },
  #     {
  #       "description": "This is the second rule",
  #       "trigger": "triggerstring 2",
  #       "search": "regex 2",
  #       "replace": "replace 2"
  #     }
  #   ]
  # }
  #
  # Redaction works by searching a string for the [TRIGGER] string. If found,
  # the [REGEX] is used to replace sensitive information with the
  # [REDACTION_MASK].  If specified with 'log_redaction_string', the
  # 'log_redaction_string' rules will be executed after the
  # 'log_redaction_file' rules.
  #
  # For example, here is a file that would redact passwords and social security numbers:

  # {
  #   "version": 1,
  #   "rules": [
  #     {
  #       "description": "Redact passwords",
  #       "trigger": "password",
  #       "search": "password=\".*\"",
  #       "replace": "password=\"???\""
  #     },
  #     {
  #       "description": "Redact social security numbers",
  #       "trigger": "",
  #       "search": "\d{3}-\d{2}-\d{4}",
  #       "replace": "XXX-XX-XXXX"
  #     }
  #   ]
  # }
  ## log_redaction_file=

  # Comma separated list of strings representing the host/domain names that the Hue server can serve.
  # e.g.: localhost,domain1,*
  ## allowed_hosts="host.domain1"

  # Administrators
  # ----------------
  [[django_admins]]
    ## [[[admin1]]]
    ## name=john
    ## email=jo...@doe.com

  # UI customizations
  # -------------------
  [[custom]]

    # Top banner HTML code
    # e.g. <H4>Test Lab A2 Hue Services</H4>
    ## banner_top_html=

    # Login splash HTML code
    # e.g. WARNING: You are required to have authorization before you proceed
    ## login_splash_html=<h4>GetHue.com</h4><br/><br/>WARNING: You have accessed a computer managed by GetHue. You are required to have authorization from GetHue before you proceed.

    # Cache timeout in milliseconds for the assist, autocomplete, etc.
    # defaults to 86400000 (1 day), set to 0 to disable caching
    ## cacheable_ttl=86400000

    # SVG code to replace the default Hue logo in the top bar and sign in screen
    # e.g. <image xlink:href="/static/desktop/art/hue-logo-mini-white.png" x="0" y="0" height="40" width="160" />
    ## logo_svg=

  # Configuration options for user authentication into the web application
  # ------------------------------------------------------------------------
  [[auth]]

    # Authentication backend. Common settings are:
    # - django.contrib.auth.backends.ModelBackend (entirely Django backend)
    # - desktop.auth.backend.AllowAllBackend (allows everyone)
    # - desktop.auth.backend.AllowFirstUserDjangoBackend
    #     (Default. Relies on Django and user manager, after the first login)
    # - desktop.auth.backend.LdapBackend
    # - desktop.auth.backend.PamBackend - WARNING: existing users in Hue may be inaccessible if they do not exist in the OS
    # - desktop.auth.backend.SpnegoDjangoBackend
    # - desktop.auth.backend.RemoteUserDjangoBackend
    # - libsaml.backend.SAML2Backend
    # - libopenid.backend.OpenIDBackend
    # - liboauth.backend.OAuthBackend
    #     (New OAuth; supports Twitter, Facebook, Google+ and LinkedIn)
    # Multiple Authentication backends are supported by specifying a comma-separated list in order of priority.
    # However, in order to enable OAuthBackend, it must be the ONLY backend configured.
    #backend=desktop.auth.backend.LdapBackend
    backend=desktop.auth.backend.PamBackend

    # Class which defines extra accessor methods for User objects.
    ## user_aug=desktop.auth.backend.DefaultUserAugmentor

    # The service to use when querying PAM.
    pam_service=sudo sshd login

    # When using the desktop.auth.backend.RemoteUserDjangoBackend, this sets
    # the normalized name of the header that contains the remote user.
    # The HTTP header in the request is converted to a key by converting
    # all characters to uppercase, replacing any hyphens with underscores
    # and adding an HTTP_ prefix to the name. So, for example, if the header
    # is called Remote-User that would be configured as HTTP_REMOTE_USER
    #
    # Defaults to HTTP_REMOTE_USER
    ## remote_user_header=HTTP_REMOTE_USER

    # Ignore the case of usernames when searching for existing users.
    # Supported in remoteUserDjangoBackend and SpnegoDjangoBackend
    ## ignore_username_case=true

    # Forcibly cast usernames to lowercase, takes precedence over force_username_uppercase
    # Supported in remoteUserDjangoBackend and SpnegoDjangoBackend
    ## force_username_lowercase=true

    # Forcibly cast usernames to uppercase, cannot be combined with force_username_lowercase
    ## force_username_uppercase=false

    # Users will expire after they have not logged in for 'n' seconds.
    # A negative number means that users will never expire.
    ## expires_after=-1

    # Apply 'expires_after' to superusers.
    ## expire_superusers=true

    # Users will automatically be logged out after 'n' seconds of inactivity.
    # A negative number means that idle sessions will not be timed out.
    idle_session_timeout=-1

    # Force users to change password on first login with desktop.auth.backend.AllowFirstUserDjangoBackend
    ## change_default_password=false

    # Number of login attempts allowed before a record is created for failed logins
    ## login_failure_limit=3

    # After number of allowed login attempts are exceeded, do we lock out this IP and optionally user agent?
    ## login_lock_out_at_failure=false

    # If set, defines period of inactivity in seconds after which failed logins will be forgotten
    ## login_cooloff_time=60

    # If True, lock out based on an IP address AND a user agent.
    # This means requests from different user agents but from the same IP are treated differently.
    ## login_lock_out_use_user_agent=false

    # If True, lock out based on IP and user
    ## login_lock_out_by_combination_user_and_ip=false

    # If True, it will look for the IP address from the header defined at reverse_proxy_header.
    ## behind_reverse_proxy=false

    # If behind_reverse_proxy is True, it will look for the IP address from this header. Default: HTTP_X_FORWARDED_FOR
    ## reverse_proxy_header=HTTP_X_FORWARDED_FOR

  # Configuration options for connecting to LDAP and Active Directory
  # -------------------------------------------------------------------
  [[ldap]]

    # The search base for finding users and groups
    #base_dn="DC=test,DC=123,DC=exMPLE,DC=com"

    # URL of the LDAP server
   # ldap_url=ldap://ldap.123.com

    # The NT domain used for LDAP authentication
   # nt_domain=test.123.com

    # A PEM-format file containing certificates for the CA's that
    # Hue will trust for authentication over TLS.
    # The certificate for the CA that signed the
    # LDAP server certificate must be included among these certificates.
    ## ldap_cert=
   # use_start_tls=false

    # Distinguished name of the user to bind as -- not necessary if the LDAP server
    # supports anonymous searches
   

    # Password of the bind user -- not necessary if the LDAP server supports
    # anonymous searches
   # bind_password=2rf65HGzx12

    # Execute this script to produce the bind user password. This will be used
    # when 'bind_password' is not set.
    ## bind_password_script=

    # Pattern for searching for usernames -- Use <username> for the parameter
    # For use when using LdapBackend for Hue authentication
    # If nt_domain is specified, this config is completely ignored. 
    # If nt_domain is not specified, this should take on the form "cn=<username>,dc=example,dc=com", 
    # where <username> is replaced by whatever is provided at the login page. Depending on your ldap schema, 
    # you can also specify additional/alternative comma-separated attributes like uid, ou, etc
    ## ldap_username_pattern="uid=<username>,ou=People,dc=mycompany,dc=com"

    # Create users in Hue when they try to login with their LDAP credentials
    # For use when using LdapBackend for Hue authentication
   # create_users_on_login = false

    # Synchronize a user's groups when they log in
    ## sync_groups_on_login=false

    # Ignore the case of usernames when searching for existing users in Hue.
    ## ignore_username_case=false
   # ignore_username_case=false

    # Force usernames to lowercase when creating new users from LDAP.
    # Takes precedence over force_username_uppercase
    ## force_username_lowercase=true
   # force_username_lowercase=false

    # Force usernames to uppercase, cannot be combined with force_username_lowercase
    ## force_username_uppercase=false

    # Use search bind authentication.
    # If set to true, hue will perform ldap search using bind credentials above (bind_dn, bind_password)
    # Hue will then search using the 'base_dn' for an entry with attr defined in 'user_name_attr', with the value
    # of short name provided on the login page. The search filter defined in 'user_filter' will also be used to limit
    # the search. Hue will search the entire subtree starting from base_dn.
    # If search_bind_authentication is set to false, Hue performs a direct bind to LDAP using the credentials provided
    # (not bind_dn and bind_password specified in hue.ini). There are 2 modes here - 'nt_domain' is specified or not.  
   # search_bind_authentication=true

    # Choose which kind of subgrouping to use: nested or suboordinate (deprecated).
    ## subgroups=suboordinate

    # Define the number of levels to search for nested members.
    ## nested_members_search_depth=10

    # Whether or not to follow referrals
    ## follow_referrals=false

    # Enable python-ldap debugging.
   # debug=true

    # Sets the debug level within the underlying LDAP C lib.
   # debug_level=255

    # Possible values for trace_level are 0 for no logging, 1 for only logging the method calls with arguments,
    # 2 for logging the method calls with arguments and the complete results and 9 for also logging the traceback of method calls.
   # trace_level=9

    [[[users]]]

      # Base filter for searching for users
   #   user_filter="objectClass=user"

      # The username attribute in the LDAP schema
   #   user_name_attr=uid

    [[[groups]]]

      # Base filter for searching for groups
#      group_filter="memberOf={dn}"
#
      # The group name attribute in the LDAP schema
#      group_name_attr=cn
#
      # The attribute of the group object which identifies the members of the group
#      group_member_attr=memberOf

    [[[ldap_servers]]]

      ## [[[[mycompany]]]]

        # The search base for finding users and groups
        ## base_dn="DC=mycompany,DC=com"

        # URL of the LDAP server
        ## ldap_url=ldap://auth.mycompany.com

        # The NT domain used for LDAP authentication
        ## nt_domain=mycompany.com

        # A PEM-format file containing certificates for the CA's that
        # Hue will trust for authentication over TLS.
        # The certificate for the CA that signed the
        # LDAP server certificate must be included among these certificates.
        ## ldap_cert=
        ## use_start_tls=true

        # Distinguished name of the user to bind as -- not necessary if the LDAP server
        # supports anonymous searches
        ## bind_dn="CN=ServiceAccount,DC=mycompany,DC=com"

        # Password of the bind user -- not necessary if the LDAP server supports
        # anonymous searches
        ## bind_password=

        # Execute this script to produce the bind user password. This will be used
        # when 'bind_password' is not set.
        ## bind_password_script=

        # Pattern for searching for usernames -- Use <username> for the parameter
        # For use when using LdapBackend for Hue authentication
        ## ldap_username_pattern="uid=<username>,ou=People,dc=mycompany,dc=com"

        ## Use search bind authentication.
        ## search_bind_authentication=true

        # Whether or not to follow referrals
        ## follow_referrals=false

        # Enable python-ldap debugging.
        ## debug=false

        # Sets the debug level within the underlying LDAP C lib.
        ## debug_level=255

        # Possible values for trace_level are 0 for no logging, 1 for only logging the method calls with arguments,
        # 2 for logging the method calls with arguments and the complete results and 9 for also logging the traceback of method calls.
        ## trace_level=0

        ## [[[[[users]]]]]

          # Base filter for searching for users
          ## user_filter="objectclass=Person"

          # The username attribute in the LDAP schema
          ## user_name_attr=sAMAccountName

        ## [[[[[groups]]]]]

          # Base filter for searching for groups
          ## group_filter="objectclass=groupOfNames"

          # The username attribute in the LDAP schema
          ## group_name_attr=cn

  # Configuration options for specifying the Desktop Database.
  # ------------------------------------------------------------------------
  [[database]]
    # Database engine is typically one of:
    # postgresql_psycopg2, mysql, sqlite3 or oracle.
    #
    # Note that for sqlite3, 'name', below is a path to the filename. For other backends, it is the database name
    # Note for Oracle, options={"threaded":true} must be set in order to avoid crashes.
    # Note for Oracle, you can use the Oracle Service Name by setting "host=" and "port=" and then "name=<host>:<port>/<service_name>".
    # Note for MariaDB use the 'mysql' engine.
    # TODO XXX
    engine=mysql
    host=127.0.0.1
    port=3306
    user=hue
    password=hue
    name=hue
    # Execute this script to produce the database password. This will be used when 'password' is not set.
    ## password_script=/path/script
    ## name=desktop/desktop.db
    ## options={}
    # Database schema, to be used only when public schema is revoked in postgres
    ## schema=public

  # Configuration options for specifying the Desktop session.
  # ------------------------------------------------------------------------
  [[session]]
    # The cookie containing the users' session ID will expire after this amount of time in seconds.
    # Default is 2 weeks.
    ## ttl=1209600

    # The cookie containing the users' session ID and csrf cookie will be secure.
    # Should only be enabled with HTTPS.
    ## secure=false

    # The cookie containing the users' session ID and csrf cookie will use the HTTP only flag.
    ## http_only=true

    # Use session-length cookies. Logs out the user when she closes the browser window.
    ## expire_at_browser_close=false


  # Configuration options for connecting to an external SMTP server
  # ------------------------------------------------------------------------
  [[smtp]]

    # The SMTP server information for email notification delivery
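    # A hedged example, since no host= value appears in this section; the stock
    # hue.ini template defaults to the local machine:
    # host=localhost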
    
    port=25
    user=
    password=

    # Whether to use a TLS (secure) connection when talking to the SMTP server
    tls=no

    # Default email address to use for various automated notification from Hue
    ## default_from_email=hue@localhost


  # Configuration options for Kerberos integration for secured Hadoop clusters
  # ------------------------------------------------------------------------
  [[kerberos]]

    # Path to Hue's Kerberos keytab file
   # hue_keytab=/opt/mapr/conf/mapr.keytab
    # Kerberos principal name for Hue
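    # A hedged example of the principal format, assuming the stock template's
    # hue/<FQDN> convention (no hue_principal value appears in this section):
    ## hue_principal=hue/hostname.foo.com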
  
    # Path to kinit
   # kinit_path=/usr/bin/kinit


  # Configuration options for using OAuthBackend (Core) login
  # ------------------------------------------------------------------------
  [[oauth]]
    # The Consumer key of the application
    ## consumer_key=XXXXXXXXXXXXXXXXXXXXX

    # The Consumer secret of the application
    ## consumer_secret=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX

    # The Request token URL
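    # A hedged example (stock hue.ini default, for Twitter):
    ## request_token_url=https://api.twitter.com/oauth/request_token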

    # The Access token URL
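    # A hedged example (stock hue.ini default, for Twitter):
    ## access_token_url=https://api.twitter.com/oauth/access_token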

    # The Authorize URL
    ## authenticate_url=https://api.twitter.com/oauth/authorize

  # Configuration options for Metrics
  # ------------------------------------------------------------------------
  [[metrics]]

   # Enable the metrics URL "/desktop/metrics"
   ## enable_web_metrics=True

   # If specified, Hue will write metrics to this file.
   ## location=/var/log/hue/metrics.json

   # Time in milliseconds on how frequently to collect metrics
   ## collection_interval=30000


###########################################################################
# Settings to configure the snippets available in the Notebook
###########################################################################

[notebook]

  ## Show the notebook menu or not
  # show_notebooks=true

  ## Flag to enable the bulk submission of queries as a background task through Oozie.
  # enable_batch_execute=true

  ## Flag to enable the SQL query builder of the table assist.
  # enable_query_builder=true

  ## Flag to enable the creation of a coordinator for the current SQL query.
  # enable_query_scheduling=false

  ## Base URL to Remote GitHub Server
  # github_remote_url=https://github.com

  ## Base URL to GitHub API
  # github_api_url=https://api.github.com

  ## Client ID for Authorized GitHub Application
  # github_client_id=

  ## Client Secret for Authorized GitHub Application
  # github_client_secret=

  ## Main flag to override the automatic starting of the DBProxy server.
  # enable_dbproxy_server=true

  ## Classpath to be appended to the default DBProxy server classpath.
  # dbproxy_extra_classpath=

  ## Comma separated list of interpreters that should be shown on the wheel. This list takes precedence over the
  ## order in which the interpreter entries appear. Only the first 5 interpreters will appear on the wheel.
  # interpreters_shown_on_wheel=

  # One entry for each type of snippet.
  [[interpreters]]
    # Define the name and how to connect and execute the language.

    # This interpreter will be disabled automatically if beeswax app blacklisted
    [[[hive]]]
      # The name of the snippet.
      name=Hive
      # The backend connection to use to communicate with the server.
      interface=hiveserver2

    # This interpreter will be disabled automatically if beeswax or impala apps blacklisted
    [[[impala]]]
      name=Impala
      interface=hiveserver2

    # [[[sparksql]]]
    #   name=SparkSql
    #   interface=hiveserver2

    [[[spark]]]
      name=Scala
      interface=livy

    [[[pyspark]]]
      name=PySpark
      interface=livy

    [[[r]]]
      name=R
      interface=livy

    [[[jar]]]
      name=Spark Submit Jar
      interface=livy-batch

    [[[py]]]
      name=Spark Submit Python
      interface=livy-batch

    # HUE-6074 Hue not able to execute oozie snippets
    # [[[pig]]]
    #   name=Pig
    #   interface=oozie

    [[[text]]]
      name=Text
      interface=text

    [[[markdown]]]
      name=Markdown
      interface=text

    [[[mysql]]]
      name = MySQL
      interface=rdbms

    [[[sqlite]]]
      name = SQLite
      interface=rdbms

    [[[postgresql]]]
      name = PostgreSQL
      interface=rdbms

    [[[oracle]]]
      name = Oracle
      interface=rdbms

    [[[solr]]]
      name = Solr SQL
      interface=solr
      ## Name of the collection handler
      # options='{"collection": "default"}'

    # HUE-6074 Hue not able to execute oozie snippets
    # [[[java]]]
    #   name=Java
    #   interface=oozie

    # HUE-6074 Hue not able to execute oozie snippets
    # [[[spark2]]]
    #   name=Spark
    #   interface=oozie

    # HUE-6074 Hue not able to execute oozie snippets
    # [[[mapreduce]]]
    #   name=MapReduce
    #   interface=oozie

    # HUE-6074 Hue not able to execute oozie snippets
    # [[[sqoop1]]]
    #   name=Sqoop1
    #   interface=oozie

    # HUE-6074 Hue not able to execute oozie snippets
    # [[[distcp]]]
    #   name=Distcp
    #   interface=oozie

    # HUE-6074 Hue not able to execute oozie snippets
    # [[[shell]]]
    #   name=Shell
    #   interface=oozie

    # [[[mysql]]]
    #   name=MySql JDBC
    #   interface=jdbc
    #   ## Specific options for connecting to the server.
    #   ## The JDBC connectors, e.g. mysql.jar, need to be in the CLASSPATH environment variable.
    #   ## If 'user' and 'password' are omitted, they will be prompted in the UI.
    #   options='{"url": "jdbc:mysql://localhost:3306/hue", "driver": "com.mysql.jdbc.Driver", "user": "root", "password": "root"}'


###########################################################################
# Settings to configure your Hadoop cluster.
###########################################################################

[hadoop]

  # Configuration for HDFS NameNode
  # ------------------------------------------------------------------------
  [[hdfs_clusters]]
    # HA support by using HttpFs

    [[[default]]]
      # Enter the filesystem uri
      fs_defaultfs=maprfs:///

      # NameNode logical name.
      ## logical_name=

      # Use WebHdfs/HttpFs as the communication mechanism.
      # Domain should be the NameNode or HttpFs host.
      # Default port is 14000 for HttpFs.
      # TODO: calculate this
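      # A hedged example, assuming HttpFs on the default port 14000 as in the
      # stock template (no webhdfs_url value appears in this section):
      # webhdfs_url=http://localhost:14000/webhdfs/v1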
      

      # Change this if your HDFS cluster is secured
      ## security_enabled=${security_enabled}
      security_enabled=true

      # Security mechanism of authentication none/GSSAPI/MAPR-SECURITY
      ## mechanism=${mechanism}
      mechanism=MAPR-SECURITY

      ssl=True
      ssl_cert=$HUE_HOME/cert.pem
      ssl_key=$HUE_HOME/hue_private_keystore.pem

      # Enable mutual ssl authentication
      # mutual_ssl_auth=False
      # ssl_cert=/opt/mapr/hue/hue-3.11.0/cert.pem
      # ssl_key=/opt/mapr/hue/hue-3.11.0/hue_private_keystore.pem

      # In secure mode (HTTPS), if SSL certificates from YARN Rest APIs
      # have to be verified against certificate authority
      ca_verify=False

      # File size restriction for viewing file (float)
      # '1.0' - default 1 GB file size restriction
      # '0' - no file size restrictions
      # >0  - set file size restriction in gigabytes, ex. 0.5, 1.0, 1.2...
      ## file_size=1.0

      # Directory of the Hadoop configuration
      ## hadoop_conf_dir=$HADOOP_CONF_DIR when set or '/etc/hadoop/conf'

  # Configuration for YARN (MR2)
  # ------------------------------------------------------------------------
  [[yarn_clusters]]

    [[[default]]]
      # Enter the host on which you are running the ResourceManager
      resourcemanager_host=maprfs:///

      # The port where the ResourceManager IPC listens on
       resourcemanager_port=8032

      # Whether to submit jobs to this cluster
       submit_to=True
      
      # Resource Manager logical name (required for HA)
      #logical_name=rm1

      # Change this if your YARN cluster is secured
       security_enabled=${security_enabled}
      #security_enabled=true

      # Security mechanism of authentication none/GSSAPI/MAPR-SECURITY
       mechanism=${mechanism}
      #mechanism=MAPR-SECURITY

      # URL of the ResourceManager API
      # resourcemanager_api_url=http://tstapp263vs:8088

      # URL of the ProxyServer API
      # proxy_api_url=http://tstapp263vs:8088

      # URL of the HistoryServer API
      # history_server_api_url=http://tstapp263vs:19888

      # URL of the Spark History Server
      #spark_history_server_url=http://tstapp263vs:18088

      # In secure mode (HTTPS), if SSL certificates from YARN Rest APIs
      # have to be verified against certificate authority
      ssl_cert_ca_verify=False
      ca_verify=False

    # HA support by specifying multiple clusters.
    # Redefine different properties there.
    # e.g.
#TRISTAN
     #[[[ha]]]
     #  resourcemanager_api_url=http://tstapp262vs:8088
     #  history_server_api_url=http://tstapp262vs:19888
     #  proxy_api_url=http://tstapp262vs:8088
     #  resourcemanager_rpc_url=http://tstapp262vs:8050
     #  logical_name=rm2
     #  submit_to=True

      # Resource Manager logical name (required for HA)
      # logical_name=my-rm-name

      # Un-comment to enable
      # submit_to=True

      # URL of the ResourceManager API
      # resourcemanager_api_url=http://localhost:8088

      # ...

  ## end of yarn_clusters

  # Configuration for MapReduce (MR1)
  # ------------------------------------------------------------------------
  [[mapred_clusters]]

    [[[default]]]
      # Enter the host on which you are running the Hadoop JobTracker
      jobtracker_host=localhost

      # The port where the JobTracker IPC listens on
      jobtracker_port=9001

      # JobTracker logical name for HA
      ## logical_name=

      # Thrift plug-in port for the JobTracker
      ## thrift_port=9290

      # Whether to submit jobs to this cluster
       submit_to=False

      # Change this if your MapReduce cluster is secured
      security_enabled=${security_enabled}

      # Security mechanism of authentication none/GSSAPI/MAPR-SECURITY
      mechanism=${mechanism}

    # HA support by specifying multiple clusters
    # e.g.

    # [[[ha]]]
      # Enter the logical name of the JobTrackers
      # logical_name=my-jt-name

  ## end of mapred_clusters


###########################################################################
# Settings to configure liboozie
###########################################################################

[liboozie]
  # The URL where the Oozie service runs on. This is required in order for
  # users to submit jobs. Empty value disables the config check.
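  # A hedged example, assuming the stock default, which matches the
  # localhost:11000 requests seen in the log excerpt below:
  ## oozie_url=http://localhost:11000/oozie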

  # Requires FQDN in oozie_url if enabled
  security_enabled=${security_enabled}

  # Location on HDFS where a directory will be created to deploy workflows/coordinators when submitted by a non-owner.
  # This directory will be created with 1777 permissions, and should be the root of "remote_deployement_dir".
  ## remote_deployement_root=/oozie/deployments

  # Location on HDFS where the workflows/coordinators are deployed when submitted by a non-owner.
  # Parameters are $TIME, $USER and $JOBID, e.g. /user/$USER/hue/deployments/$JOBID-$TIME.
  ## remote_deployement_dir=/oozie/deployments/_$USER_-oozie-$JOBID-$TIME

  # Security mechanism of authentication none/GSSAPI/MAPR-SECURITY
  mechanism=${mechanism}
###########################################################################
# Settings to configure the Oozie app
###########################################################################

[oozie]
  # Location on local FS where the examples are stored.
  ## local_data_dir=../../examples

  # Location on local FS where the data for the examples is stored.
  ## sample_data_dir=/opt/mapr/hue/hue-3.12.0/ext/thirdparty/sample_data

  # Location on HDFS where the oozie examples and workflows are stored.
  ## remote_data_dir=/user/hue/oozie/workspaces

  # Maximum number of Oozie workflows or coordinators to retrieve in one API call.
  ## oozie_jobs_count=50

  # Use Cron format for defining the frequency of a Coordinator instead of the old frequency number/unit.
  ## enable_cron_scheduling=true

  ## Flag to enable the saved Editor queries to be dragged and dropped into a workflow.
  # enable_document_action=false

  ## Flag to enable Oozie backend filtering instead of doing it at the page level in Javascript. Requires Oozie 4.3+.
  # enable_oozie_backend_filtering=false

###########################################################################
# Settings to configure Beeswax with Hive
###########################################################################

[beeswax]

  # Host where HiveServer2 is running.
  # If Kerberos security is enabled, use fully-qualified domain name (FQDN).
#TRISTAN this was error
  hive_server_host=localhost

  # Port where HiveServer2 Thrift server runs on.
  ## hive_server_port=10000

  # Hive configuration directory, where hive-site.xml is located
  hive_conf_dir=/opt/mapr/hive/hive-2.1/conf

  # Timeout in seconds for thrift calls to Hive service
  ## server_conn_timeout=120

  # Change this if your Hive is secured
  security_enabled=${security_enabled}

  # Security mechanism of authentication none/GSSAPI/MAPR-SECURITY
  mechanism=${mechanism}

  # Path to HiveServer2 start script
  ## hive_server_bin=/opt/mapr/hive/hive-2.1/bin/hiveserver2

  # Choose whether to use the old GetLog() thrift call from before Hive 0.14 to retrieve the logs.
  # If false, use the FetchResults() thrift call from Hive 1.0 or more instead.
  ## use_get_log_api=false

  # Limit the number of partitions that can be listed.
  ## list_partitions_limit=10000

  # The maximum number of partitions that will be included in the SELECT * LIMIT sample query for partitioned tables.
  ## query_partitions_limit=10

  # A limit to the number of rows that can be downloaded from a query before it is truncated.
  # A value of -1 means there will be no limit.
  ## download_row_limit=100000

  # Hue will try to close the Hive query when the user leaves the editor page.
  # This will free all the query resources in HiveServer2, but also make its results inaccessible.
  ## close_queries=false

  # Hue will use at most this many HiveServer2 sessions per user at a time.
  ## max_number_of_sessions=1

  # Thrift version to use when communicating with HiveServer2.
  # New column format is from version 7.
  ## thrift_version=7

  # A comma-separated list of white-listed Hive configuration properties that users are authorized to set.
  ## config_whitelist=hive.map.aggr,hive.exec.compress.output,hive.exec.parallel,hive.execution.engine,mapreduce.job.queuename

  # Override the default desktop username and password of the hue user used for authentications with other services.
  # e.g. Used for LDAP/PAM pass-through authentication.
  ## auth_username=hue
  ## auth_password=

  [[ssl]]
    # Path to Certificate Authority certificates.
    ## cacerts=/etc/hue/cacerts.pem

    # Choose whether Hue should validate certificates received from the server.
    ## validate=true


###########################################################################
# Settings to configure Metastore
###########################################################################

[metastore]
  # Flag to turn on the new version of the create table wizard.
  ## enable_new_create_table=false


###########################################################################
# Settings to configure Impala
###########################################################################

[impala]
  # Host of the Impala Server (one of the Impalad)
  ## server_host=localhost

  # Port of the Impala Server
  ## server_port=21050

  # Kerberos principal
  ## impala_principal=mapr/hostname.foo.com

  # Turn on/off impersonation mechanism when talking to Impala
  impersonation_enabled=False

  # Number of initial rows of a result set to ask Impala to cache in order
  # to support re-fetching them for downloading them.
  # Set to 0 for disabling the option and backward compatibility.
  ## querycache_rows=50000

  # Timeout in seconds for thrift calls
  ## server_conn_timeout=120

  # Hue will try to close the Impala query when the user leaves the editor page.
  # This will free all the query resources in Impala, but also make its results inaccessible.
  ## close_queries=true

  # If > 0, the query will be timed out (i.e. cancelled) if Impala does not do any work
  # (compute or send back results) for that query within QUERY_TIMEOUT_S seconds.
  ## query_timeout_s=0

  # If > 0, the session will be timed out (i.e. cancelled) if Impala does not do any work
  # (compute or send back results) for that session within QUERY_TIMEOUT_S seconds (default 1 hour).
  ## session_timeout_s=3600

  # Override the desktop default username and password of the hue user used for authentications with other services.
  # e.g. Used for LDAP/PAM pass-through authentication.
  ## auth_username=hue
  ## auth_password=

  # A comma-separated list of white-listed Impala configuration properties that users are authorized to set.
  # config_whitelist=debug_action,explain_level,mem_limit,optimize_partition_key_scans,query_timeout_s,request_pool

  # Path to the impala configuration dir which has impalad_flags file
  ## impala_conf_dir=${HUE_CONF_DIR}/impala-conf

  [[ssl]]
    # SSL communication enabled for this server.
    ## enabled=false

    # Path to Certificate Authority certificates.
    ## cacerts=/etc/hue/cacerts.pem

    # Choose whether Hue should validate certificates received from the server.
    ## validate=true


###########################################################################
# Settings to configure the Spark application.
###########################################################################

[spark]
  # Host address of the Livy Server.
  ## livy_server_host=localhost

  # Port of the Livy Server.
  ## livy_server_port=8998

  # Configure Livy to start in local 'process' mode, or 'yarn' workers.
  ## livy_server_session_kind=yarn

  # Host of the Sql Server
  ## sql_server_host=localhost

  # Port of the Sql Server
  ## sql_server_port=10000


###########################################################################
# Settings to configure the Filebrowser app
###########################################################################

[filebrowser]
  # Location on the local filesystem where uploaded archives are temporarily stored.
  ## archive_upload_tempdir=/tmp

  # Show Download Button for HDFS file browser.
  ## show_download_button=false

  # Show Upload Button for HDFS file browser.
  ## show_upload_button=false

  ## Flag to enable the extraction of an uploaded archive in HDFS.
  # enable_extract_uploaded_archive=false


###########################################################################
# Settings to configure Pig
###########################################################################

[pig]
  # Location of piggybank.jar on local filesystem.
  ## local_data_dir=/opt/mapr/pig/pig-0.16/contrib/piggybank/java

  # Location piggybank.jar will be copied to in HDFS.
  ## remote_data_dir=/oozie/pig/examples


###########################################################################
# Settings to configure Sqoop2
###########################################################################

[sqoop]
  # For autocompletion, fill out the librdbms section.

  # Sqoop server URL
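  # A hedged example (stock hue.ini default):
  ## server_url=http://localhost:12000/sqoop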

  # Change this if your cluster is secured
  security_enabled=${security_enabled}

  # Security mechanism of authentication none/GSSAPI/MAPR-SECURITY
  mechanism=${mechanism}


###########################################################################
# Settings to configure Proxy
###########################################################################

[proxy]
  # Comma-separated list of regular expressions,
  # which match 'host:port' of requested proxy target.
  ## whitelist=(localhost|127\.0\.0\.1):(50030|50070|50060|50075)

  # Comma-separated list of regular expressions,
  # which match any prefix of 'host:port/path' of requested proxy target.
  # This does not support matching GET parameters.
  ## blacklist=


###########################################################################
# Settings to configure HBase Browser
###########################################################################

[hbase]
  # Comma-separated list of HBase Thrift servers for clusters in the format of '(name|host:port)'.
  # Use full hostname with security.
  # If using Kerberos we assume GSSAPI SASL, not PLAIN.
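  # A hedged example in the stock template's '(name|host:port)' format:
  ## hbase_clusters=(Cluster|localhost:9090)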
  

  # HBase configuration directory, where hbase-site.xml is located.
  ## hbase_conf_dir=/opt/mapr/hbase/hbase-1.1.8/

  # Hard limit of rows or columns per row fetched before truncating.
  ## truncate_limit = 500

  # 'buffered' is the default of the HBase Thrift Server and supports security.
  # 'framed' can be used to chunk up responses,
  # which is useful when used in conjunction with the nonblocking server in Thrift.
  ## thrift_transport=buffered

  # Security mechanism of authentication none/GSSAPI/MAPR-SECURITY
  mechanism=${mechanism}


###########################################################################
# Settings to configure Solr Search
###########################################################################

[search]

  # URL of the Solr Server
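  # A hedged example (stock hue.ini default):
  ## solr_url=http://localhost:8983/solr/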

  # Requires FQDN in solr_url if enabled
  security_enabled=${security_enabled}

  ## Query sent when no term is entered
  ## empty_query=*:*

  # Use latest Solr 5.2+ features.
  ## latest=false


###########################################################################
# Settings to configure Solr API lib
###########################################################################

[libsolr]

  # Choose whether Hue should validate certificates received from the server.
  ## ssl_cert_ca_verify=true

  # Default path to Solr in ZooKeeper.
  ## solr_zk_path=/solr


###########################################################################
# Settings to configure Solr Indexer
###########################################################################

[indexer]

  # Location of the solrctl binary.
  ## solrctl_path=/usr/bin/solrctl

  # Flag to turn on the morphline based Solr indexer.
  ## enable_new_indexer=false

  # Flag to turn on the new metadata importer.
  ## enable_new_importer=false


###########################################################################
# Settings to configure Job Designer
###########################################################################

[jobsub]

  # Location on local FS where examples and template are stored.
  ## local_data_dir=..../data

  # Location on local FS where sample data is stored
  ## sample_data_dir=...thirdparty/sample_data


###########################################################################
# Settings to configure Job Browser.
###########################################################################

[jobbrowser]
  # Share submitted jobs information with all users. If set to false,
  # submitted jobs are visible only to the owner and administrators.
  ## share_jobs=true

  # Whether to disable the job kill button for all users in the jobbrowser
  ## disable_killing_jobs=false

  # Offset in bytes where a negative offset will fetch the last N bytes for the given log file (default 1MB).
  ## log_offset=-1000000

  # Show the version 2 of app which unifies all the past browsers into one.
  ## enable_v2=false


###########################################################################
# Settings to configure Sentry / Security App.
###########################################################################

[security]

  # Use Sentry API V1 for Hive.
  ## hive_v1=true

  # Use Sentry API V2 for Hive.
  ## hive_v2=false

  # Use Sentry API V2 for Solr.
  ## solr_v2=true


###########################################################################
# Settings to configure the Zookeeper application.
###########################################################################

[zookeeper]

  [[clusters]]

    [[[default]]]
      # Zookeeper ensemble. Comma separated list of Host/Port.
      # e.g. localhost:5181,node2_ip@:5181,node3_ip@:5181
      host_ports=localhost:5181

      # The URL of the REST contrib service (required for znode browsing).
      rest_url=http://localhost:9999

      # Name of Kerberos principal when using security.
      ## principal_name=zookeeper


###########################################################################
# Settings for the User Admin application
###########################################################################

[useradmin]
  # Default home directory permissions
  ## home_dir_permissions=0755

  # The name of the default user group that users will be a member of
  ## default_user_group=default

  [[password_policy]]
    # Set password policy to all users. The default policy requires password to be at least 8 characters long,
    # and contain both uppercase and lowercase letters, numbers, and special characters.

    ## is_enabled=false
    ## pwd_regex="^(?=.*?[A-Z])(?=(.*[a-z]){1,})(?=(.*[\d]){1,})(?=(.*[\W_]){1,}).{8,}$"
    ## pwd_hint="The password must be at least 8 characters long, and must contain both uppercase and lowercase letters, at least one number, and at least one special character."
    ## pwd_error_message="The password must be at least 8 characters long, and must contain both uppercase and lowercase letters, at least one number, and at least one special character."


###########################################################################
# Settings for the AWS lib
###########################################################################

[aws]
  [[aws_accounts]]
    # Default AWS account
    ## [[[default]]]
      # AWS credentials
      ## access_key_id=
      ## secret_access_key=
      ## security_token=

      # Execute this script to produce the AWS access key ID.
      ## access_key_id_script=/path/access_key_id.sh

      # Execute this script to produce the AWS secret access key.
      ## secret_access_key_script=/path/secret_access_key.sh

      # Allow the use of either environment variables or
      # EC2 InstanceProfile to retrieve AWS credentials.
      ## allow_environment_credentials=yes

      # AWS region to use
      ## region=us-east-1

      # Endpoint overrides
      ## proxy_address=
      ## proxy_port=

      # Secure connections are the default, but this can be explicitly overridden:
      ## is_secure=true

      # The default calling format uses https://<bucket-name>.s3.amazonaws.com but
      # this may not make sense if DNS is not configured in this way for custom endpoints.
      # e.g. Use boto.s3.connection.OrdinaryCallingFormat for https://s3.amazonaws.com/<bucket-name>
      ## calling_format=boto.s3.connection.S3Connection.DefaultCallingFormat


###########################################################################
# Settings for the Sentry lib
###########################################################################

[libsentry]
  # Hostname or IP of server.
  ## hostname=localhost

  # Port the sentry service is running on.
  ## port=8038

  # Sentry configuration directory, where sentry-site.xml is located.
  ## sentry_conf_dir=/opt/mapr/sentry/sentry-1.7.0/conf


###########################################################################
# Settings to configure the ZooKeeper Lib
###########################################################################

[libzookeeper]
  # ZooKeeper ensemble. Comma separated list of Host/Port.
  # e.g. localhost:2181,localhost:2182,localhost:2183
  ## ensemble=localhost:2181

  # Name of Kerberos principal when using security.
  ## principal_name=zookeeper


###########################################################################
# Settings for the RDBMS application
###########################################################################

[librdbms]
  # The RDBMS app can have any number of databases configured in the databases
  # section. A database is known by its section name
  # (i.e. sqlite, mysql, psql, and oracle in the list below).

  [[databases]]
    # sqlite configuration.
    ## [[[sqlite]]]
      # Name to show in the UI.
      ## nice_name=SQLite

      # For SQLite, name defines the path to the database.
      ## name=/opt/mapr/hue/hue-3.10.0/desktop/desktop.db

      # Database backend to use.
      ## engine=sqlite

      # Database options to send to the server when connecting.
      ## options={}

    # mysql, oracle, or postgresql configuration.
    ## [[[mysql]]]
      # Name to show in the UI.
      ## nice_name="My SQL DB"

      # For MySQL and PostgreSQL, name is the name of the database.
      # For Oracle, name is the instance of the Oracle server. For Express Edition
      # this is 'xe' by default.
      ## name=mysqldb

      # Database backend to use. This can be:
      # 1. mysql
      # 2. postgresql
      # 3. oracle
      ## engine=mysql

      # IP or hostname of the database to connect to.
      ## host=localhost

      # Port the database server is listening to. Defaults are:
      # 1. MySQL: 3306
      # 2. PostgreSQL: 5432
      # 3. Oracle Express Edition: 1521
      ## port=3306

      # Username to authenticate with when connecting to the database.
      ## user=example

      # Password matching the username to authenticate with when
      # connecting to the database.
      ## password=example

      # Database options to send to the server when connecting.
      ## options={}


###########################################################################
# Settings to configure SAML
###########################################################################

[libsaml]
  # Xmlsec1 binary path. This program should be executable by the user running Hue.
  ## xmlsec_binary=/usr/local/bin/xmlsec1

  # Entity ID for Hue acting as service provider.
  # Can also accept a pattern where '<base_url>' will be replaced with server URL base.
  ## entity_id="<base_url>/saml2/metadata/"

  # Create users from SSO on login.
  ## create_users_on_login=true

  # Required attributes to ask for from IdP.
  # This requires a comma separated list.
  ## required_attributes=uid

  # Optional attributes to ask for from IdP.
  # This requires a comma separated list.
  ## optional_attributes=

  # IdP metadata in the form of a file. This is generally an XML file containing metadata that the Identity Provider generates.
  ## metadata_file=

  # Private key to encrypt metadata with.
  ## key_file=

  # Signed certificate to send along with encrypted metadata.
  ## cert_file=

  # Path to a file containing the password of the private key.
  ## key_file_password=/path/key

  # Execute this script to produce the private key password. This will be used when 'key_file_password' is not set.
  ## key_file_password_script=/path/pwd.sh

  # A mapping from attributes in the response from the IdP to django user attributes.
  ## user_attribute_mapping={'uid': ('username', )}

  # Have Hue initiated authn requests be signed and provide a certificate.
  ## authn_requests_signed=false

  # Have Hue initiated logout requests be signed and provide a certificate.
  ## logout_requests_signed=false

  # Username can be sourced from 'attributes' or 'nameid'.
  ## username_source=attributes

  # Performs the logout or not.
  ## logout_enabled=true


###########################################################################
# Settings to configure OpenID
###########################################################################

[libopenid]
  # (Required) OpenId SSO endpoint url.
  ## server_endpoint_url=https://www.google.com/accounts/o8/id

  # OpenId 1.1 identity url prefix to be used instead of SSO endpoint url
  # This is only supported if you are using an OpenId 1.1 endpoint

  # Create users from OPENID on login.
  ## create_users_on_login=true

  # Use email for username
  ## use_email_for_username=true


###########################################################################
# Settings to configure OAuth
###########################################################################

[liboauth]
  # NOTE:
  # To work, each of the active (i.e. uncommented) services must have
  # applications created on the social network.
  # Then the "consumer key" and "consumer secret" must be provided here.
  #
  # The addresses where to do so are:
  #
  # Additionally, the following must be set in the application settings:
  # Twitter:  Callback URL (aka Redirect URL) must be set to http://YOUR_HUE_IP_OR_DOMAIN_NAME/oauth/social_login/oauth_authenticated
  # Google+ : CONSENT SCREEN must have email address
  # Facebook: Sandbox Mode must be DISABLED
  # Linkedin: "In OAuth User Agreement", r_emailaddress is REQUIRED

  # The Consumer key of the application
  ## consumer_key_twitter=
  ## consumer_key_google=
  ## consumer_key_facebook=
  ## consumer_key_linkedin=

  # The Consumer secret of the application
  ## consumer_secret_twitter=
  ## consumer_secret_google=
  ## consumer_secret_facebook=
  ## consumer_secret_linkedin=

  # The Request token URL
  ## request_token_url_twitter=https://api.twitter.com/oauth/request_token
  ## request_token_url_google=https://accounts.google.com/o/oauth2/auth
  ## request_token_url_linkedin=https://www.linkedin.com/uas/oauth2/authorization
  ## request_token_url_facebook=https://graph.facebook.com/oauth/authorize

  # The Access token URL
  ## access_token_url_twitter=https://api.twitter.com/oauth/access_token
  ## access_token_url_google=https://accounts.google.com/o/oauth2/token
  ## access_token_url_facebook=https://graph.facebook.com/oauth/access_token
  ## access_token_url_linkedin=https://api.linkedin.com/uas/oauth2/accessToken

  # The Authenticate URL
  ## authenticate_url_twitter=https://api.twitter.com/oauth/authorize
  ## authenticate_url_facebook=https://graph.facebook.com/me?access_token=

  # Username Map. Json Hash format.
  # Replaces username parts in order to simplify usernames obtained
  # Example: {"@sub1.domain.com":"_S1", "@sub2.domain.com":"_S2"}
  # converts 'email@sub1.domain.com' to 'email_S1'
  ## username_map={}

  # Whitelisted domains (only applies to Google OAuth). CSV format.
  ## whitelisted_domains_google=


###########################################################################
# Settings to configure Metadata
###########################################################################

[metadata]
  # For metadata tagging and enhancement features

  [[optimizer]]
    # Cache timeout in milliseconds for the Optimizer metadata used in assist, autocomplete, etc.
    # defaults to 432000000 (5 days), set to 0 to disable caching
    # cacheable_ttl=432000000

    # For SQL query and table analysis
    # Base URL to Optimizer API.
    # The name of the product or group which will have API access to the emails associated with it.
    ## product_name=hue
    # A secret passphrase associated with the productName
    ## product_secret=hue
    # Execute this script to produce the product secret. This will be used when 'product_secret' is not set.
    ## product_secret_script=

    # The email of the Optimizer account you want to associate with the Product.
    ## email=h...@gethue.com
    # The password associated with the Optimizer account you want to associate with the Product.
    ## email_password=hue
    # Execute this script to produce the email password. This will be used when 'email_password' is not set.
    ## password_script=

    # In secure mode (HTTPS), if Optimizer SSL certificates have to be verified against certificate authority.
    ## ssl_cert_ca_verify=True

  [[navigator]]
    # For tagging tables, files and getting lineage of data.
    # Navigator API URL (without version suffix)
    ## api_url=http://localhost:7187/api

    # Navigator API HTTP authentication username and password
    # Override the desktop default username and password of the hue user used for authentications with other services.
    # e.g. Used for LDAP/PAM pass-through authentication.
    ## auth_username=hue
    ## auth_password=

    # Execute this script to produce the auth password. This will be used when `auth_password` is not set.
    ## auth_password_script=

    # Perform Sentry privilege filtering.
    # Defaults to true automatically if the cluster is secure.
    ## apply_sentry_permissions=False

    # Max number of items to fetch in one call in object search.
    ## fetch_size_search=450

    # Max number of items to fetch in one call in object search autocomplete.
    ## fetch_size_search_interactive=450



[23/Aug/2017 10:24:16 +1000] connectionpool DEBUG    "localhost:11000 GET /oozie/v1/admin/configuration?timezone=Australia%2FCanberra&doAs=mapr HTTP/1.1" 200 None
[23/Aug/2017 10:24:16 +1000] resource     DEBUG    GET Got response: {"oozie.email.smtp.auth":"false","oozie.service.ELService.functions.coord-job-submit-data":"\n            coord:dataIn=org.apache.oozie.coord.CoordELFunctions#ph1_coord_dataIn_echo,\n            coord:dataOut=org.apache.oozie.coord.CoordELFunctions#ph1_coord_dataOut_echo,\n            coord:nominalTime=org.apache.oozie.coord.CoordELFunctions#ph1_coord_nominalTime_echo_wrap,\n            coord:actualTime=org.apache.oozie.coord.CoordELFunctions#ph1_coord_actualTime_echo_wrap,\n            coord:dateOffset=org.apache.oozie.coord.CoordELFunctions#ph1_coord_dateOffset_echo,\n            coord:dateTzOffset=org.apache.oozie.coord.CoordELFunctions#ph1_coord_dateTzOffset_echo,\n            coord:formatTime=org.apache.oozie.coord.CoordELFunctions#ph1_coord_formatTime_echo,\n            coord:epochTime=org.apache.oozie.coord.CoordELFunctions#ph1_coord_epochTime_echo,\n            coord:actionId=org.apache.oozie.coord.CoordELFunctions#ph1_coord_actionId_echo,\n            coord:name=org.apache.ooz...
[23/Aug/2017 10:24:16 +1000] connectionpool DEBUG    "tstapp263vs.test.act.gov.au:14000 PUT /webhdfs/v1/oozie/workspaces/hue-oozie-1503447778.24/hive-565a.sql?permission=0644&op=CREATE&user.name=mapr&overwrite=true&doas=mapr HTTP/1.1" 307 0
[23/Aug/2017 10:24:16 +1000] connectionpool DEBUG    "tstapp263vs.test.act.gov.au:14000 PUT /webhdfs/v1/oozie/workspaces/hue-oozie-1503447778.24/hive-565a.sql?op=CREATE&doas=mapr&data=true&user.name=mapr&permission=0644&overwrite=true HTTP/1.1" 201 0
[23/Aug/2017 10:24:16 +1000] resource     DEBUG    PUT Got response: 
[23/Aug/2017 10:24:16 +1000] submission2  DEBUG    Created/Updated /oozie/workspaces/hue-oozie-1503447778.24/hive-565a.sql
[23/Aug/2017 10:24:16 +1000] middleware   INFO     Processing exception: u'hive2': Traceback (most recent call last):
  File "/opt/mapr/hue/hue-3.12.0/build/env/lib/python2.6/site-packages/Django-1.6.10-py2.6.egg/django/core/handlers/base.py", line 112, in get_response
    response = wrapped_callback(request, *callback_args, **callback_kwargs)
  File "/opt/mapr/hue/hue-3.12.0/build/env/lib/python2.6/site-packages/Django-1.6.10-py2.6.egg/django/db/transaction.py", line 371, in inner
    return func(*args, **kwargs)
  File "/opt/mapr/hue/hue-3.12.0/apps/oozie/src/oozie/decorators.py", line 113, in decorate
    return view_func(request, *args, **kwargs)
  File "/opt/mapr/hue/hue-3.12.0/apps/oozie/src/oozie/decorators.py", line 75, in decorate
    return view_func(request, *args, **kwargs)
  File "/opt/mapr/hue/hue-3.12.0/apps/oozie/src/oozie/views/editor2.py", line 668, in submit_coordinator
    job_id = _submit_coordinator(request, coordinator, mapping)
  File "/opt/mapr/hue/hue-3.12.0/apps/oozie/src/oozie/views/editor2.py", line 694, in _submit_coordinator
    wf_dir = Submission(request.user, wf, request.fs, request.jt, mapping, local_tz=coordinator.data['properties']['timezone']).deploy()
  File "/opt/mapr/hue/hue-3.12.0/desktop/libs/liboozie/src/liboozie/submission2.py", line 249, in deploy
    oozie_xml = self.job.to_xml(self.properties)
  File "/opt/mapr/hue/hue-3.12.0/apps/oozie/src/oozie/models2.py", line 451, in to_xml
    'workflow_mapping': workflow_mapping
  File "/opt/mapr/hue/hue-3.12.0/desktop/core/src/desktop/lib/django_mako.py", line 114, in render_to_string_normal
    result = template.render(**data_dict)
  File "/opt/mapr/hue/hue-3.12.0/build/env/lib/python2.6/site-packages/Mako-0.8.1-py2.6.egg/mako/template.py", line 443, in render
    return runtime._render(self, self.callable_, args, data)
  File "/opt/mapr/hue/hue-3.12.0/build/env/lib/python2.6/site-packages/Mako-0.8.1-py2.6.egg/mako/runtime.py", line 786, in _render
    **_kwargs_for_callable(callable_, data))
  File "/opt/mapr/hue/hue-3.12.0/build/env/lib/python2.6/site-packages/Mako-0.8.1-py2.6.egg/mako/runtime.py", line 818, in _render_context
    _exec_template(inherit, lclcontext, args=args, kwargs=kwargs)
  File "/opt/mapr/hue/hue-3.12.0/build/env/lib/python2.6/site-packages/Mako-0.8.1-py2.6.egg/mako/runtime.py", line 844, in _exec_template
    callable_(context, *args, **kwargs)
  File "/tmp/tmpN77t2C/oozie/editor2/gen/workflow.xml.mako.py", line 80, in render_body
    credential = mapping['credentials'][cred_type]
KeyError: u'hive2'
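
For reference, the failing line in the generated workflow.xml.mako template is a plain dictionary lookup: the HiveServer2 action asks for a credential of type 'hive2', and the credential mapping built for the submission has no entry under that key, so the template raises KeyError: u'hive2'. A minimal Python sketch of the failing shape (the data below is hypothetical and only the lookup pattern matches the traceback; this is not Hue's actual code):

# Hypothetical reproduction of the lookup that fails while rendering the workflow XML.
mapping = {
    'credentials': {},       # assumed: no 'hive2' credential is available for this submission
}
requested_types = ['hive2']  # assumed: the Hive2 action declares a 'hive2' credential

try:
    for cred_type in requested_types:
        credential = mapping['credentials'][cred_type]
except KeyError as e:
    print('missing credential type: %s' % e)  # prints: missing credential type: 'hive2'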




awi...@maprtech.com

unread,
Aug 24, 2017, 6:15:03 PM
to Hue-Users
@Romain

Please advise.

awi...@maprtech.com

unread,
Aug 24, 2017, 6:23:40 PM
to Hue-Users
Sorry, the error block was missing from my previous message. Adding it below.
[24/Aug/2017 15:21:42 -0700] middleware   INFO     Processing exception: Workflow submission failed: Traceback (most recent call last):
  File "/opt/mapr/hue/hue-3.12.0/build/env/lib/python2.6/site-packages/Django-1.6.10-py2.6.egg/django/core/handlers/base.py", line 112, in get_response
    response = wrapped_callback(request, *callback_args, **callback_kwargs)
  File "/opt/mapr/hue/hue-3.12.0/build/env/lib/python2.6/site-packages/Django-1.6.10-py2.6.egg/django/db/transaction.py", line 371, in inner
    return func(*args, **kwargs)
  File "/opt/mapr/hue/hue-3.12.0/apps/oozie/src/oozie/decorators.py", line 113, in decorate
    return view_func(request, *args, **kwargs)
  File "/opt/mapr/hue/hue-3.12.0/apps/oozie/src/oozie/decorators.py", line 75, in decorate
    return view_func(request, *args, **kwargs)
  File "/opt/mapr/hue/hue-3.12.0/apps/oozie/src/oozie/views/editor2.py", line 373, in submit_workflow
    return _submit_workflow_helper(request, workflow, submit_action=reverse('oozie:editor_submit_workflow', kwargs={'doc_id': workflow.id}))
  File "/opt/mapr/hue/hue-3.12.0/apps/oozie/src/oozie/views/editor2.py", line 411, in _submit_workflow_helper
    raise PopupException(_('Workflow submission failed'), detail=smart_str(e))
PopupException: Workflow submission failed
[24/Aug/2017 15:21:42 -0700] exceptions_renderable ERROR    Potential trace: [('/opt/mapr/hue/hue-3.12.0/apps/oozie/src/oozie/views/editor2.py', 409, '_submit_workflow_helper', 'job_id = _submit_workflow(request.user, request.fs, request.jt, workflow, mapping)'), ('/opt/mapr/hue/hue-3.12.0/apps/oozie/src/oozie/views/editor2.py', 442, '_submit_workflow', 'job_id = submission.run()'), ('/opt/mapr/hue/hue-3.12.0/desktop/libs/liboozie/src/liboozie/submission2.py', 50, 'decorate', 'deployment_dir = self.deploy()'), ('/opt/mapr/hue/hue-3.12.0/desktop/libs/liboozie/src/liboozie/submission2.py', 249, 'deploy', 'oozie_xml = self.job.to_xml(self.properties)'), ('/opt/mapr/hue/hue-3.12.0/apps/oozie/src/oozie/models2.py', 451, 'to_xml', "'workflow_mapping': workflow_mapping"), ('/opt/mapr/hue/hue-3.12.0/desktop/core/src/desktop/lib/django_mako.py', 114, 'render_to_string_normal', 'result = template.render(**data_dict)'), ('/opt/mapr/hue/hue-3.12.0/build/env/lib/python2.6/site-packages/Mako-0.8.1-py2.6.egg/mako/template.py', 443, 'render', 'return runtime._render(self, self.callable_, args, data)'), ('/opt/mapr/hue/hue-3.12.0/build/env/lib/python2.6/site-packages/Mako-0.8.1-py2.6.egg/mako/runtime.py', 786, '_render', '**_kwargs_for_callable(callable_, data))'), ('/opt/mapr/hue/hue-3.12.0/build/env/lib/python2.6/site-packages/Mako-0.8.1-py2.6.egg/mako/runtime.py', 818, '_render_context', '_exec_template(inherit, lclcontext, args=args, kwargs=kwargs)'), ('/opt/mapr/hue/hue-3.12.0/build/env/lib/python2.6/site-packages/Mako-0.8.1-py2.6.egg/mako/runtime.py', 844, '_exec_template', 'callable_(context, *args, **kwargs)'), ('/tmp/tmpsAEVxm/oozie/editor2/gen/workflow.xml.mako.py', 80, 'render_body', "credential = mapping['credentials'][cred_type]")]
[24/Aug/2017 15:21:42 -0700] exceptions_renderable ERROR    Potential detail: u'hive2'
[24/Aug/2017 15:21:42 -0700] submission2  DEBUG    Created/Updated /oozie/workspaces/hue-oozie-1503514503.69/hive-f254.sql
[24/Aug/2017 15:21:42 -0700] resource     DEBUG    PUT Got response: 
[24/Aug/2017 15:21:42 -0700] connectionpool DEBUG    "localhost:14000 PUT /webhdfs/v1/oozie/workspaces/hue-oozie-1503514503.69/hive-f254.sql?op=CREATE&user.name=mapr&overwrite=true&data=true&permission=0644&doas=mapr HTTP/1.1" 201 0
[24/Aug/2017 15:21:42 -0700] connectionpool DEBUG    "localhost:14000 PUT /webhdfs/v1/oozie/workspaces/hue-oozie-1503514503.69/hive-f254.sql?permission=0644&op=CREATE&user.name=mapr&overwrite=true&doas=mapr HTTP/1.1" 307 0
[24/Aug/2017 15:21:42 -0700] resource DEBUG GET Got response: {"oozie.service.PurgeService.purge.limit":"100","oozie.service.ELService.latest-el.use-current-time":"false","oozie.service.AbandonedCoordCheckerService.job.older.than":"2880","oozie.delete.runtime.dir.on.shutdown":"true","oozie.service.ELService.constants.coord-sla-create":" ","oozie.coord.action.get.all.attributes":"false","oozie.service.coord.normal.default.timeout":"120","oozie.service.ELService.functions.coord-sla-create":"\n coord:dataOut=org.apache.oozie.coord.CoordELFunctions#ph3_coord_dataOut,\n coord:nominalTime=org.apache.oozie.coord.CoordELFunctions#ph2_coord_nominalTime,\n coord:actualTime=org.apache.oozie.coord.CoordELFunctions#ph2_coord_actualTime,\n coord:dateOffset=org.apache.oozie.coord.CoordELFunctions#ph2_coord_dateOffset,\n coord:dateTzOffset=org.apache.oozie.coord.CoordELFunctions#ph2_coord_dateTzOffset,\n .
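
The second trace points at the same root cause: "Potential detail: u'hive2'" is the missing key, i.e. the credential mapping handed to the template contains no 'hive2' entry. It may be worth checking whether the HiveServer2 action in the workflow has a 'hive2' credential selected even though the cluster is not configured to provide one. A small, hypothetical debugging sketch (not Hue code) that compares what an action requests with what a submission mapping actually contains:

# Hypothetical helper: list credential types an action requests
# but the submission mapping does not provide.
def missing_credentials(mapping, requested_types):
    available = set(mapping.get('credentials', {}))
    return [t for t in requested_types if t not in available]

print(missing_credentials({'credentials': {}}, ['hive2']))
# prints: ['hive2']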


