Create Maintenance Notification for SAP S/4HANA using IBM Watson and SAP services



Use case

To comprehend the benefits of integrating SAP and IBM Watson, you will look at a use case.

One day, while a service engineer was working outside, he heard a machine making an unusual noise. Because his tools obstructed him, it was difficult to type a long repair report on the website, so he had to go back to his office. While filling out the form, he had to find the machine among many others in the database and recall details such as the malfunction status, which could result in inaccurate or insufficient input. In the worst case, he might forget to submit the report altogether because of other work that needed to be done. If he had used an application that combined the services of SAP and IBM Watson, the situation would have been different.

The app has the following advantages.

  • Anyone can report a problem as soon as they discover it.
  • The location is defined by geolocation.
  • No typing is required for the problem description, so the user's hands stay free.
  • The phone helps find the correct technical object.
  • Problems can be documented with text and images.
  • Only a mobile phone is required, and not a desktop PC.

Main process



Solutions Architecture

SAP Build Apps, formerly known as SAP AppGyver, is a Low-Code/No-Code (LCNC) service used as the view layer. Kyma is an orchestration service for developing, operating, and managing cloud-native application runtimes on top of Kubernetes. It runs a Node.js package within a Docker container in the Kubernetes cluster. The Node.js package consists of three layers: proxy, application, and target. The proxy layer exposes the proxy endpoint and passes the data received from SAP Build Apps to the application layer. The application layer implements the business logic. Finally, the target layer acts as the target endpoint and calls the APIs of the various services with the data it receives from the application layer.
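The division of responsibilities can be sketched as plain functions (hypothetical names and logic; the real package wires these layers together through HTTP routes):

```javascript
// Minimal sketch of the three-layer Node.js structure (hypothetical names).

// Target layer: would call an external service API; stubbed out here.
const targetLayer = (body) => ({ status: 201, sent: body });

// Application layer: business logic, e.g. trivial normalization.
const applicationLayer = (body) =>
  targetLayer({ ...body, NotificationText: body.NotificationText.trim() });

// Proxy layer: receives the request body from SAP Build Apps and forwards it.
const proxyLayer = (reqBody) => applicationLayer(reqBody);
```

Keeping the layers separate like this makes it easy to swap the target (IBM Watson, SAP Integration Suite, SAP S/4HANA) without touching the business logic.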

Proxy architecture


IBM Watson Assistant

IBM Watson Assistant enables natural conversations with computers by using AI to understand the intent of a message. Because the conversation data it collects is stored, it can also improve in efficiency and accuracy over time.

The assistant is composed of Actions and Steps. An Action is a section of dialogue between the assistant and the user that aims to solve a problem or complete a task. In the demo, “Create Maintenance Notification” is an Action that resolves the issue ‘Report a machine problem’. Since multiple Actions can be created, the app tells the assistant the ID of the clicked component so that the assistant can identify which Action the user wants to perform.



A Step is a single dialog between the assistant and the user within an Action. Steps are combined to solve a problem and can perform functions such as choosing response types, updating variables, validation, and conditional branching. Conditional branching can be used to ask the user a follow-up question when the assistant does not understand a response, or to decide whether to switch from a Maintenance Notification request to a Maintenance Order request when “Very high” is selected in the maintenance priority Step.
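The priority branch described above can be expressed as a one-line decision (a hypothetical helper for illustration; in the demo this branching is configured in the Watson Assistant UI, not in code):

```javascript
// Decide which request type to create based on the maintenance priority
// selected in the Step: "Very high" switches to a Maintenance Order.
const nextRequestType = (priority) =>
  priority === "Very high" ? "MaintenanceOrder" : "MaintenanceNotification";
```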


Create Steps in the Action

In this use case, two response types are used: Options and Free text. While typed input provides the Free text values during Action development in IBM Watson Assistant, IBM Watson Speech to Text supplies them during application development in SAP Build Apps.

The chatbot flow can be tested by clicking the Preview button.

Further details


IBM Watson Speech to Text

IBM Watson Speech to Text recognizes spoken language and transcribes it into text with a high degree of accuracy. It is compatible with various audio file types and supports many languages. Moreover, the model can be customized to meet the specific needs of different use cases. In addition, it offers speaker diarization, which distinguishes between multiple speakers; this is particularly useful for transcribing conversations or discussions among several people.

// src/watson/s2t/target/apis.js
  async convertWavToText(wav) {
    return await speechToText.recognize({
      audio: wav,
      contentType: "audio/wav",
      model: "en-US_BroadbandModel",
      backgroundAudioSuppression: 0.2,
      keywords: [
        // ...
      ],
      keywordsThreshold: 0.5,
      // ...
    });
  }
Advanced: By collecting commonly used words from the SAP module via SAP S/4HANA APIs, formatting them into the appropriate data format, registering them as keywords, and setting an appropriate threshold, the accuracy of speech recognition can be improved.
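As a sketch, the collected words could be deduplicated and formatted into the keyword-spotting parameters like this (a hypothetical helper; the actual word collection via SAP S/4HANA APIs is not shown):

```javascript
// Format words collected from the SAP module into the keywords and
// keywordsThreshold parameters expected by IBM Watson Speech to Text.
const buildKeywordParams = (words, threshold = 0.5) => {
  const keywords = [...new Set(words.map((w) => w.trim().toLowerCase()))];
  return { keywords, keywordsThreshold: threshold };
};
```

For example, buildKeywordParams(["Pump ", "pump", "Vibration"]) yields { keywords: ["pump", "vibration"], keywordsThreshold: 0.5 }, which can be spread into the recognize call shown above.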


Core parts

Define Step class

SAP Build Apps is an LCNC platform built on React Native. It allows you to define a class and create instances of it, just like in any regular programming language. You create a Step class for a dialog component that includes a question from the AI and an answer from the user. Within each Step instance, the information received from IBM Watson Assistant is used to modify the styling and other properties of the component.
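A minimal version of such a Step class might look like this (hypothetical field names; the real component holds additional styling properties):

```javascript
// A dialog Step pairing a question from the assistant with the user's answer.
class Step {
  constructor(question, responseType) {
    this.question = question;         // text received from IBM Watson Assistant
    this.responseType = responseType; // "Options" or "Free text"
    this.answer = null;
  }

  // Record the user's answer; returns the instance for chaining.
  setAnswer(answer) {
    this.answer = answer;
    return this;
  }

  isAnswered() {
    return this.answer !== null;
  }
}
```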

Provide candidate Technical Objects

By calculating the distance between the current location obtained through SAP Build Apps and the location of each Technical Object registered in the SAP S/4HANA database, the system can generate a list of candidate objects sorted by proximity. Thanks to that, the user can easily find the desired object among many.

// src/watson/assistant/application/calculateMeters.js
const calculateMeters = (currentLat, currentLon, targetLat, targetLon) => {
  const EARTH_RADIUS = 6371e3;
  const φ1 = (currentLat * Math.PI) / 180;
  const φ2 = (targetLat * Math.PI) / 180;
  const Δφ = ((targetLat - currentLat) * Math.PI) / 180;
  const Δλ = ((targetLon - currentLon) * Math.PI) / 180;

  const a =
    Math.sin(Δφ / 2) * Math.sin(Δφ / 2) +
    Math.cos(φ1) * Math.cos(φ2) * Math.sin(Δλ / 2) * Math.sin(Δλ / 2);
  const c = 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
  const metres = Math.round(EARTH_RADIUS * c);
  return metres;
};

module.exports = calculateMeters;
The haversine formula can be used to calculate the distance between two points on a sphere. Although the Earth is not a perfect sphere, the error is negligible for this use case.
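Sorting the candidates then becomes a small map-and-sort; here is a sketch with hypothetical object fields (the haversine distance function is repeated so the snippet is self-contained):

```javascript
// Haversine distance in metres between two coordinates.
const calculateMeters = (lat1, lon1, lat2, lon2) => {
  const EARTH_RADIUS = 6371e3;
  const toRad = (deg) => (deg * Math.PI) / 180;
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return Math.round(EARTH_RADIUS * 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a)));
};

// Sort Technical Objects by distance from the current location.
const sortByProximity = (current, objects) =>
  objects
    .map((o) => ({
      ...o,
      meters: calculateMeters(current.lat, current.lon, o.lat, o.lon),
    }))
    .sort((a, b) => a.meters - b.meters);
```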

Describe long text information by speech

By pressing the record button and explaining the situation verbally, the user avoids lengthy text inputs. If the converted text differs from what was actually spoken, the user can click the re-record icon and try again.

The workflow

A larger file size is a disadvantage, even though it allows higher sound quality and longer recording time. In this use case, however, recordings are rarely longer than one minute, so it is better to prioritize high audio quality, since speech recognition accuracy is critical. First, you use the Start recording audio and Stop recording audio functions of SAP Build Apps. These functions create a temporary local audio file (with an extension such as AAC), which can then be converted to Base64 with the Convert to Base64 function before being sent to the proxy.

The input arguments of Start recording audio:

Input arguments   Description                                     Example values
Sample rate       Sample rate for the recording, in Hz.           44100
Channels          Record in one channel (mono) or two (stereo).   1
Bits per sample   How many bits per sample should be stored.      24

The proxy code:

// src/watson/s2t/application/convertBase64ToWav.js
const { Readable } = require("stream");

const convertReadableToBufferList = require("./convertReadableToBufferList");

// Base64 => Buffer => Readable Stream => Writable Stream => Buffer[] => wav: Buffer
const convertBase64ToWav = async (base64) => {
  const buffer = Buffer.from(base64, "base64");
  const readableStream = Readable.from(buffer);
  const bufferList = await convertReadableToBufferList(readableStream);
  const wav = Buffer.concat(bufferList);
  return wav;
};

module.exports = convertBase64ToWav;
To maintain optimal audio quality, it is recommended to use a WAV buffer, which is not a lossy compression format, as the input for IBM Watson Speech to Text in this case. Therefore, it is necessary to convert the various MIME types, such as audio/mp4 and audio/x-hx-aac-adts, which depend on the recording settings of each device, to audio/wav. During this process, the readable stream is piped through a writable stream and collected into an array of buffers.

// src/watson/s2t/application/convertReadableToBufferList.js
const ffmpeg = require("fluent-ffmpeg");
const ffmpegPath = require("@ffmpeg-installer/ffmpeg").path;

ffmpeg.setFfmpegPath(ffmpegPath);

const convertReadableToBufferList = async (readableStream) => {
  const writableStream = ffmpeg(readableStream).noVideo().format("wav").pipe();
  const bufferList = [];
  writableStream.on("data", (d) => {
    bufferList.push(d);
  });
  await new Promise((resolve, reject) => {
    writableStream
      .on("end", () => {
        // Wait 100 ms so that all chunks are pushed before resolving
        setTimeout(() => {
          resolve();
        }, 100);
      })
      .on("error", (err) => {
        reject(err);
      });
  });
  return bufferList;
};

module.exports = convertReadableToBufferList;

The ‘data’ event is emitted whenever the stream passes a chunk of data to a consumer, for example when the stream’s pipe function is called. However, the ‘end’ event is sometimes emitted, and the “Output stream closed” error caught, before all the split buffers have been pushed to bufferList. To avoid this error, wait 100 ms with setTimeout after the ‘end’ event is triggered.


Create Maintenance Request

The Options selected in SAP Build Apps need to be mapped because they are not suitable values for sending to SAP S/4HANA.

// src/integrationSuite/target/mapping.js
const mappingList = {
  MaintNotificationCode: {
    "Erratic output": "ERO",
    "Insufficient power": "POW",
    "Load drop": "LOA",
    Noise: "NOI",
    Overheating: "OHE",
    Vibration: "VIB",
    Other: "OTH",
    Unknown: "UNK",
  },
  MaintenanceObjectIsDown: {
    Yes: "True",
    No: "False",
  },
  // ...
};

const mapping = (body) => {
  for (const reqKey in mappingList) {
    const bodyValue = body[reqKey];
    if (bodyValue && mappingList[reqKey][bodyValue]) {
      body[reqKey] = mappingList[reqKey][bodyValue];
    }
  }
  return body;
};

module.exports = mapping;


The request content includes the ReporterFullName and ReportedByUser values, which are obtained when the user is authenticated on the login page. The response contains the Maintenance Notification ID, which is then used to attach images in the next step.

// src/integrationSuite/target/apis.js
  async postMaintNotif(body, proxyRes) {
    const mappedBody = mapping(body);
    return await axios
      .post("/v1/pr/API_MAINTNOTIFICATION/MaintenanceNotification", {
        TechnicalObject: mappedBody.TechnicalObject,
        NotificationText: mappedBody.NotificationText,
        MaintNotificationCode: mappedBody.MaintNotificationCode,
        MalfunctionEffect: mappedBody.MalfunctionEffect,
        MaintenanceObjectIsDown: mappedBody.MaintenanceObjectIsDown,
        MaintNotificationLongText: mappedBody.MaintNotificationLongText,
        MaintNotifLongTextForEdit: mappedBody.MaintNotificationLongText,
        MaintPriority: mappedBody.MaintPriority,
        ReporterFullName: mappedBody.ReporterFullName,
        ReportedByUser: mappedBody.ReportedByUser,
        MalfunctionStartDate: mappedBody.MalfunctionStartDate,
        LocationDescription: mappedBody.LocationDescription,
        RequiredStartDate: mappedBody.RequiredStartDate,
        RequiredEndDate: mappedBody.RequiredEndDate,
        NotificationReferenceDate: mappedBody.NotificationReferenceDate,
        NotificationType: "Y1",
        TechObjIsEquipOrFuncnlLoc: "EAMS_EQUI",
        MaintNotificationCodeGroup: "YB-PMGNL",
      })
      .then((res) => {
        // ...
      });
  }


Attach images

You can use the camera function in SAP Build Apps to take a picture and send it to the proxy in Base64 format. However, the image data can be large, so it is necessary to check the library settings. In this case, Express.js is used, and its default request body limit is 100 KB. If the request body exceeds the limit, a 413 error (‘request entity too large’) occurs.

To solve this, you can configure Express.js as follows:

// src/app.js
const express = require("express");
const app = express();
app.use(express.urlencoded({ extended: true, limit: "100mb" }));

You need to retrieve the values of ‘Cookie’ and ‘X-CSRF-Token’ first because they must be included in the headers of the POST request.

// src/s4HANA/target/apis.js
    const res = await axios("/opu/odata/sap/API_CV_ATTACHMENT_SRV", {
      headers: {
        "x-csrf-token": "FETCH",
      },
    });
    const cookie =
      res.headers["set-cookie"][0] + ";" + res.headers["set-cookie"][1];
    const xCSRFToken = res.headers["x-csrf-token"];

The details of the header parameters are as follows.

Header parameters        Description                                             Example values
BusinessObjectTypeName   SAP Fiori UI shows not DMS (Document Management         PMQMEL
                         System) but GOS (Generic Object Services)
LinkedSAPObjectKey       The Maintenance Notification ID prefixed with 0         000010000624
Slug                     The name of the file with extension                     TechnicalObject1.jpg

The file type of the body must be binary.

// src/s4HANA/target/apis.js
    return await axios
      .post(
        // ... attachment endpoint and binary file body
        {
          headers: {
            "Content-Type": "image/jpeg; image/png",
            BusinessObjectTypeName: "PMQMEL",
            LinkedSAPObjectKey: body.linkedSAPObjectKey,
            Slug: body.slug,
            Cookie: cookie,
            "X-CSRF-Token": xCSRFToken,
          },
        }
      );



You have seen how IBM Watson and SAP services can be combined to create a Maintenance Notification by speech on a mobile device. This integration can greatly benefit businesses by streamlining their operations. Although a Maintenance Notification was demonstrated here, the same approach can be applied to business efficiency applications in every SAP module.

Please share your feedback or thoughts in a comment. Feel free to follow and contact me if you have any other use cases or solutions in mind, or if you would like more details.




Thanks to Gunter Albrecht and Mohini Verma for supporting me on this project!



You can access SAP S/4HANA topics from here!
