HAPI FHIR and SQL server: problems with database case config


Vincent

Oct 27, 2025, 5:43:01 AM
to HAPI FHIR
Hello,

I am trying to deploy a HAPI FHIR server backed by a SQL Server database.

My YAML config file contains the following:
#Uncomment the "servlet" and "context-path" lines below to make the fhir endpoint available at /example/path/fhir instead of the default value of /fhir
server:
  #  servlet:
  #    context-path: /example/path
  port: 8080
  tomcat:
    # allow | as a separator in the URL
    relaxed-query-chars: "|"
#Adds the option to go to e.g. http://localhost:8080/actuator/health for seeing the running configuration
management:
  health:
    elasticsearch:
      enabled: false
  #The following configuration will enable the actuator endpoints at /actuator/health, /actuator/info, /actuator/prometheus, /actuator/metrics. For security purposes, only /actuator/health is enabled by default.
  endpoints:
    enabled-by-default: false
    web:
      exposure:
        # expose only health (default) — change to [health,info,prometheus,metrics] if you want them reachable
        include: health
  endpoint:
    info:
      enabled: true
    metrics:
      enabled: true
    health:
      enabled: true
      probes:
        enabled: true
      group:
        liveness:
          include:
            - livenessState
            - readinessState
    prometheus:
      enabled: true
  prometheus:
    metrics:
      export:
        enabled: true
spring:
  ai:
    mcp:
      server:
        name: FHIR MCP Server
        version: 1.0.0
        instructions: "This server provides access to a FHIR RESTful API. You can use it to query FHIR resources, perform operations, and retrieve data in a structured format."
        enabled: true
        streamable-http:
          mcp-endpoint: /mcp/messages

  autoconfigure:
    # This exclude is only needed for setups not using Elasticsearch where the elasticsearch sniff is not needed.
    exclude: org.springframework.boot.autoconfigure.elasticsearch.ElasticsearchRestClientAutoConfiguration
  main:
    allow-bean-definition-overriding: false
    allow-circular-references: true
  flyway:
    enabled: false
    baseline-on-migrate: true
    fail-on-missing-locations: false
  datasource:
    url: 'jdbc:sqlserver://sub.mydomain.fr;databaseName=db;encrypt=false'
    username: sa
    password: ABCD1234abcd
    driver-class-name: com.microsoft.sqlserver.jdbc.SQLServerDriver

    # database connection pool size
    hikari:
      maximum-pool-size: 10
  #  elasticsearch:
  #    uris: http://localhost:9200
  #    username: elastic
  #    password: changeme
  jpa:
    properties:
      hibernate:
        hbm2ddl:
          auto: update
        jdbc:
          batch_size: 20
        cache:
          use_query_cache: false
          use_second_level_cache: false
          use_structured_entries: false
          use_minimal_puts: false
        format_sql: false
        show_sql: false
        #If using H2, then supply the value of ca.uhn.fhir.jpa.model.dialect.HapiFhirH2Dialect
        #If using postgres, then supply the value of ca.uhn.fhir.jpa.model.dialect.HapiFhirPostgresDialect
        dialect: ca.uhn.fhir.jpa.model.dialect.HapiFhirSQLServerDialect
        #physical_naming_strategy:
        search:
          enabled: true
          schema_management:
            strategy: create
          ### lucene parameters
          #backend:
          # type: lucene
          # directory:
          #   type: local-filesystem
          #   root: target/lucenefiles
          # analysis:
          #   configurer: ca.uhn.fhir.jpa.search.HapiHSearchAnalysisConfigurers$HapiLuceneAnalysisConfigurer

hapi:
  fhir:
    ### This flag when enabled to true, will avail evaluate measure operations from CR Module.
    ### Flag is false by default, can be passed as command line argument to override.
    cr:
      enabled: false
      caregaps:
        reporter: "default"
        section_author: "default"
      terminologyServerClientSettings:
        maxRetryCount: 3
        retryIntervalMillis: 1000
        timeoutSeconds: 30
        socketTimeout: 60
      cql:
        use_embedded_libraries: true
        compiler:
          ### These are low-level compiler options.
          ### They are not typically needed by most users.
          # validate_units: true
          # verify_only: false
          # compatibility_level: "1.5"
          error_level: Info
          signature_level: All
          # analyze_data_requirements: false
          # collapse_data_requirements: false
          # translator_format: JSON
          # enable_date_range_optimization: true
          enable_annotations: true
          enable_locators: true
          enable_results_type: true
          enable_detailed_errors: true
          # disable_list_traversal: false
          # disable_list_demotion: false
          # enable_interval_demotion: false
          # enable_interval_promotion: false
          # disable_method_invocation: false
          # require_from_keyword: false
          # disable_default_model_info_load: false
        runtime:
          debug_logging_enabled: false
          # enable_validation: false
          # enable_expression_caching: true
        terminology:
          valueset_preexpansion_mode: REQUIRE # USE_IF_PRESENT, REQUIRE, IGNORE
          valueset_expansion_mode: PERFORM_NAIVE_EXPANSION # AUTO, USE_EXPANSION_OPERATION, PERFORM_NAIVE_EXPANSION
          valueset_membership_mode: USE_EXPANSION # AUTO, USE_VALIDATE_CODE_OPERATION, USE_EXPANSION
          code_lookup_mode: USE_VALIDATE_CODE_OPERATION # AUTO, USE_VALIDATE_CODE_OPERATION, USE_CODESYSTEM_URL
        data:
          search_parameter_mode: USE_SEARCH_PARAMETERS # AUTO, USE_SEARCH_PARAMETERS, FILTER_IN_MEMORY
          terminology_parameter_mode: FILTER_IN_MEMORY #  AUTO, USE_VALUE_SET_URL, USE_INLINE_CODES, FILTER_IN_MEMORY
          profile_mode: DECLARED # ENFORCED, DECLARED, OPTIONAL, TRUST, OFF

    cdshooks:
      enabled: false
      clientIdHeaderName: client_id

    ### This enables the swagger-ui at /fhir/swagger-ui/index.html as well as the /fhir/api-docs (see https://hapifhir.io/hapi-fhir/docs/server_plain/openapi.html)
    openapi_enabled: true
    ### This is the FHIR version. Choose between, DSTU2, DSTU3, R4 or R5
    fhir_version: R4
    ### Flag is false by default. This flag enables runtime installation of IG's.
    ig_runtime_upload_enabled: false
    allow_external_references: true
    advanced_lucene_indexing: false
    search_index_full_text_enabled: false
    bulk_export_enabled: false
    bulk_import_enabled: false
    #    filter_search_enabled: true
    #    graphql_enabled: true
    narrative_enabled: false
    mdm_enabled: false
    mdm_rules_json_location: "mdm-rules.json"

    logical_urls:

    cors:
      allow_Credentials: true
      allowed_origin:
        - '*'

    # Search coordinator thread pool sizes
    search-coord-core-pool-size: 20
    search-coord-max-pool-size: 100
    search-coord-queue-capacity: 200

    # Search Prefetch Thresholds.

    # This setting sets the number of search results to prefetch. For example, if this list
    # is set to [100, 1000, -1] then the server will initially load 100 results and not
    # attempt to load more. If the user requests subsequent page(s) of results and goes
    # past 100 results, the system will load the next 900 (up to the following threshold of 1000).
    # The system will progressively work through these thresholds.
    # A threshold of -1 means to load all results. Note that if the final threshold is a
    # number other than -1, the system will never prefetch more than the given number.
    search_prefetch_thresholds: 13,503,2003,-1
    retain_cached_searches_mins: 0
    reuse_cached_search_results_millis: 0

    tester:
      home:
        name: Local Tester
        server_address: 'http://localhost:8080/fhir'
        refuse_to_fetch_third_party_urls: false
        fhir_version: R4
      global:
        name: Global Tester
        server_address: "http://hapi.fhir.org/baseR4"
        refuse_to_fetch_third_party_urls: false
        fhir_version: R4
    inline_resource_storage_below_size: 4000

I am running the SQL server instance as a Docker container, with this command:
docker run -e "ACCEPT_EULA=Y" -e "MSSQL_SA_PASSWORD=ABCD1234abcd" -e "MSSQL_PID=Evaluation" -p 1433:1433  --name sqlpreview --hostname sqlpreview -d mcr.microsoft.com/mssql/server:2019-latest
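As a side note on the container setup: the mssql Docker image defaults to the SQL_Latin1_General_CP1_CI_AS collation unless MSSQL_COLLATION is set, so it may be worth confirming which collation is actually in effect once the container is up. A quick check, assuming access to the instance via sqlcmd or a SQL client:

```sql
-- Report the server-level collation actually in effect on this instance.
SELECT SERVERPROPERTY('Collation') AS server_collation;
```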

When I start the FHIR server, the database is created automatically, with tables in lowercase. 
But when the app tries to query the database, it fails with an error message suggesting that the lowercase table names may be the problem:
ca.uhn.fhir.rest.server.exceptions.InternalErrorException: HAPI-1262: org.hibernate.exception.SQLGrammarException: JDBC exception executing SQL [SELECT t0.RES_ID FROM HFJ_RESOURCE t0 WHERE ((t0.RES_TYPE = ?) AND (t0.RES_DELETED_AT IS NULL)) order by RES_ID offset 0 rows fetch first ? rows only] [Invalid object name 'HFJ_RESOURCE'.]
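To confirm whether the tables really exist only in lowercase, the system catalog can be inspected directly. A sketch (the database name `db` is taken from the JDBC URL above); note that under a case-sensitive collation the two LIKE patterns match different sets of tables:

```sql
-- List the tables Hibernate actually created, to confirm whether they exist
-- only as hfj_resource (lowercase) rather than HFJ_RESOURCE.
USE db;
SELECT name
FROM sys.tables
WHERE name LIKE 'hfj%' OR name LIKE 'HFJ%'
ORDER BY name;
```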

As far as I can tell, these configs are all correct:
  • spring.datasource including driverClassName
  • spring.jpa.dialect
For the database config, I followed the README at GitHub (hapifhir/hapi-fhir-jpaserver-starter), and my database server uses the Latin1_General_100_CS_AS collation (case-sensitive).
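Since the collation seems central here: under a case-sensitive (CS) collation, `hfj_resource` and `HFJ_RESOURCE` are two different identifiers, which is consistent with the "Invalid object name" error. The database-level collation can also differ from the server-level one, so it is worth checking both. A sketch, with the database name `db` taken from the JDBC URL, and one possible workaround (only if dropping and recreating the database is acceptable in this deployment):

```sql
-- Check the collation of the specific database; it may differ from the
-- server-level collation reported by SERVERPROPERTY('Collation').
SELECT DATABASEPROPERTYEX('db', 'Collation') AS database_collation;

-- Possible workaround: recreate the database with the case-insensitive (CI)
-- counterpart of the collation, so lowercase and uppercase identifiers
-- resolve to the same objects. Destructive -- use only on a fresh deployment.
-- DROP DATABASE db;
-- CREATE DATABASE db COLLATE Latin1_General_100_CI_AS;
```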

So I'm wondering what I'm doing wrong... If anyone has an idea...

Thanks a lot in advance!


Vincent
