Improving Trust and Accountability in AI Systems through Technological Era Advancement for Decision Support in Indonesian Manufacturing Companies

This study explores how technological developments in Artificial Intelligence (AI) decision support systems within Indonesian manufacturing organizations interact with the dynamics of trust and accountability. A cross-sectional quantitative research approach was employed to gather responses from a representative sample of professionals spanning organizational levels, age groups, and functions. The results show a high degree of trust in AI systems, driven largely by reliability and transparency, while strong perceived accountability frameworks encourage prudent decision-making. Technological advancements, especially Explainable AI and bias mitigation, significantly influence both trust and accountability. The study's demographic analysis supports a nuanced interpretation and offers practitioners and policymakers practical insights for ethical AI integration in Indonesia's industrial sector.


INTRODUCTION
The integration of Artificial Intelligence (AI) in the manufacturing industry has brought about significant changes in decision-making processes, leading to optimized operations, increased productivity, and overall growth. However, this transformative technology also presents a set of challenges, particularly in establishing trust and accountability in AI systems [1], [2]. AI has the potential to improve many environments and processes, including the manufacturing sector. It can enhance decision-making by recognizing patterns, understanding languages, perceiving relationships and connections, and following decision algorithms proposed by experts. AI can also improve itself by integrating new experiences and by solving problems or performing tasks [3].
In the realm of manufacturing, AI can be used to detect and predict anomalies in production lines, thereby enhancing productivity and throughput [4]. For instance, a smart system that integrates data collected from a deep learning module with a machine learning module can detect defects, categorize them, and use this knowledge to improve the quality of subsequent parts [5]-[7]. Such a system can lead to higher production rates of acceptable products and lower scrap or rework rates [8]. However, the integration of AI in decision-making processes also brings challenges. Chief among them are the ethical issues that arise as businesses use AI to its full potential: algorithmic bias, data privacy concerns, and the demand for human oversight must be carefully considered [9]. Moreover, AI systems in the workplace increasingly substitute for employees' tasks, responsibilities, and decision-making. Consequently, employees must relinquish core activities of their work processes without the ability to interact with the AI system, which can affect their professional role identity [10]. As Indonesian manufacturing companies progressively embrace AI for decision support, it becomes imperative to address growing concerns about trustworthiness and accountability.
Trust is a cornerstone in the successful adoption of AI: employees and stakeholders must have confidence in the decisions made by AI systems to fully leverage their potential, while accountability mechanisms are needed to ensure that AI-driven decisions are transparent, fair, and justifiable. One way to increase trust in AI systems is by quantifying uncertainty in AI predictions, for example by distinguishing the uncertainties of the AI methods used in different applications [11]. Trust can also be fostered by ensuring that roles within the AI system have the right attributes for trustworthiness [12].
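As a minimal sketch of the uncertainty-quantification idea (not the cited authors' method), disagreement within a small model ensemble can be surfaced as a trust cue so that low-confidence AI decisions are flagged for human review. The models and values below are hypothetical.

```python
# Illustrative sketch: quantifying prediction uncertainty via
# ensemble disagreement, so uncertain AI decisions can be routed
# to a human reviewer instead of being applied automatically.
import statistics

def ensemble_predict(models, x):
    """Return the mean prediction and its spread across an ensemble.

    A large spread (population standard deviation) signals
    disagreement between models, i.e. higher uncertainty.
    """
    preds = [m(x) for m in models]
    return statistics.mean(preds), statistics.pstdev(preds)

# Hypothetical defect-probability models for a production line
models = [lambda x: 0.82, lambda x: 0.79, lambda x: 0.85]
mean, spread = ensemble_predict(models, x=None)
print(f"defect probability {mean:.2f} +/- {spread:.2f}")
```

A decision rule could then defer to a human operator whenever the spread exceeds a chosen threshold.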
Transparency in AI-driven decisions is crucial for their acceptance. AI-driven HRM, for instance, presents potential ambiguities that might support sustainable company development or prevent AI application: job design, transparency, performance, and data ambiguity [13]. Transparency in AI systems can also be enhanced by using a holistic framework for AI systems in industrial applications [14].
Accountability mechanisms are crucial to ensure that AI-driven decisions are justifiable, and AI systems should be designed so that they can be held accountable for their actions. In healthcare, for instance, AI itself cannot be held liable for flawed decisions; from a multi-agent systems viewpoint, 'trust' requires that all environmental, psychological, and technical conditions be responsive to patient safety [15].
Fairness in AI systems is another important factor. For instance, perceptions of procedural fairness significantly mediate the relation between non-financial measures and budget gaming in manufacturing companies [16]. Fairness perceptions of performance evaluation systems are associated with employee outcomes but have yet to be linked to budget gaming behaviour [16]. Ethical considerations in AI adoption are also crucial: AI systems should be designed and used in ways that respect ethical principles. In AI-driven healthcare, for instance, 'responsibility', 'accountability', 'privacy', 'transparency', and 'fairness' need to be secured for all parties involved, given the ethical and legal concerns and their threat to trust [15].
Despite the evident advantages of integrating AI into manufacturing decision processes, issues surrounding trust and accountability persist. Transparency gaps, biases, and a lack of clear accountability frameworks pose potential obstacles to the widespread acceptance and effective utilization of AI systems. These challenges are further compounded in the context of Indonesian manufacturing, where unique cultural, organizational, and regulatory factors may influence the dynamics of trust and accountability in AI decision support. This research aims to delve into these challenges, seeking to understand the current landscape of trust in AI systems within Indonesian manufacturing companies and assess the perceived accountability of these systems in decision-making processes. Moreover, the study seeks to explore how advancements in technology can be harnessed to address these challenges and ultimately enhance the trustworthiness and accountability of AI in the manufacturing sector.

LITERATURE REVIEW

Trust in AI Systems
Trust is a complex construct that is essential for AI systems to be successfully integrated into decision-making processes. According to [17]-[19] and other scholars, the core components shaping confidence in AI include transparency, explainability, reliability, and fairness. Transparency entails making an AI system's decision-making process understandable to people, addressing the "black box" problem. Explainability is the system's capacity to communicate its choices in a way that is intelligible to humans. Reliability guarantees consistent performance, and fairness requires equitable outcomes across different user groups.
Prior research has demonstrated that an absence of explainability and openness can impede trust and breed skepticism [20], [21]. In the context of Indonesian manufacturing enterprises, it is crucial to comprehend the cultural factors influencing trust. Hofstede's cultural dimensions theory, which emphasizes power distance and uncertainty avoidance, may help explain the dynamics of trust in hierarchical organizational systems [20], [22], [23].

Accountability in AI Decision Support
Accountability is critical in ensuring that AI systems are responsible for their decisions. It involves transparency, responsibility, and traceability [24]. Transparent decision-making processes enable users to comprehend how decisions are reached, fostering accountability. Responsibility implies a clear identification of the entity or individuals responsible for AI decisions, while traceability ensures that decision-making processes can be audited and understood retroactively [25].
In the manufacturing context, where decisions have direct implications for operational efficiency and safety, accountability becomes paramount. Existing literature provides insights into accountability frameworks [26] and the legal and ethical considerations surrounding AI accountability [27], [28]. However, the applicability of these frameworks to the unique organizational structures and regulatory environments of Indonesian manufacturing companies remains an underexplored area.

Impact of Technological Advancements
Recent technological advancements offer promising avenues to address trust and accountability challenges in AI systems. Explainable AI (XAI) techniques, including interpretable machine learning models and model-agnostic approaches, aim to demystify the decision-making process [29], [30]. Bias mitigation strategies, such as fairness-aware algorithms, strive to eliminate biases in AI-driven decisions [31], [32]. Furthermore, advancements in federated learning and privacy-preserving AI mechanisms address concerns related to data privacy and security in decision support systems [33]-[35].
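To make the bias-mitigation idea concrete, the sketch below (not from the cited works) computes the demographic parity difference, one common fairness metric: the gap in positive-decision rates between two groups. The decision data and group labels are hypothetical.

```python
# Illustrative fairness check: demographic parity difference between
# two groups "A" and "B". A value near 0 suggests the AI system
# produces positive decisions at similar rates for both groups.
def demographic_parity_difference(decisions, groups):
    """decisions: list of 0/1 AI outcomes; groups: protected
    attribute ("A" or "B") for each case."""
    rate = {}
    for g in ("A", "B"):
        outcomes = [d for d, grp in zip(decisions, groups) if grp == g]
        rate[g] = sum(outcomes) / len(outcomes)
    return rate["A"] - rate["B"]

# Hypothetical audit data
decisions = [1, 0, 1, 1, 0, 1, 0, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_difference(decisions, groups)
print(gap)  # 0.75 - 0.25 = 0.5, a large disparity worth investigating
```

Fairness-aware algorithms then constrain or reweight training so that such gaps shrink without destroying predictive accuracy.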
While these advancements demonstrate potential, their effectiveness in the Indonesian manufacturing context needs exploration.Cultural factors, data availability, and the socio-economic landscape may influence the applicability and success of these technologies in addressing trust and accountability concerns specific to Indonesian manufacturing companies.

RESEARCH METHOD

Using a cross-sectional quantitative research design, this study examines how employees of Indonesian manufacturing companies perceive trust and accountability in AI systems. The cross-sectional design was selected for its effectiveness in gathering data at a single point in time, allowing a thorough examination of the current level of trust and accountability in AI decision support systems. A structured questionnaire was distributed to collect participant responses on trust in AI systems, perceived accountability, and the influence of technological advancements.

Sampling
The study's target population comprises employees at various levels of Indonesian manufacturing enterprises. A stratified random sampling method was employed to ensure representation from a range of departments, hierarchies, and functions, taking into account the diversity of positions in manufacturing businesses. This method ensures that the sample accurately reflects the diversity of the manufacturing workforce. After screening for potential outliers, 246 of the 300 questionnaires initially distributed were completed, yielding 246 usable responses for this study.
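The stratified sampling step can be sketched as follows; this is an illustrative re-implementation, assuming a hypothetical employee roster with a `department` field, not the authors' actual procedure.

```python
# Illustrative stratified random sampling: draw a sample whose
# strata (here: departments) appear in the same proportions as in
# the full roster.
import random

def stratified_sample(roster, key, n_total, seed=42):
    rng = random.Random(seed)
    # Group the roster into strata by the chosen attribute
    strata = {}
    for person in roster:
        strata.setdefault(person[key], []).append(person)
    # Sample from each stratum proportionally to its size
    sample = []
    for members in strata.values():
        k = round(n_total * len(members) / len(roster))
        sample.extend(rng.sample(members, min(k, len(members))))
    return sample

# Hypothetical roster: 60 production employees and 40 R&D employees
roster = (
    [{"id": i, "department": "production"} for i in range(60)]
    + [{"id": i, "department": "R&D"} for i in range(60, 100)]
)
picked = stratified_sample(roster, key="department", n_total=10)
# picked mirrors the 60/40 split: ~6 production, ~4 R&D
```

Proportional allocation per stratum is what guarantees that departments, hierarchies, and functions are represented in the sample roughly as they occur in the workforce.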

Data Collection
Data were gathered using a structured questionnaire crafted to capture the nuances of trust, accountability, and the influence of technological advancements on AI systems in a manufacturing setting. To evaluate the questionnaire's validity, reliability, and clarity, a small sample of the target population completed it beforehand, and feedback from this pre-test was used to make revisions before wider distribution.
To ensure efficient data collection, the questionnaire was distributed electronically. Participants were assured of the security and anonymity of their answers, encouraging candid and open responses.

Data Analysis
The quantitative survey data were analyzed using the Statistical Package for the Social Sciences (SPSS 25). The analysis combined descriptive and inferential statistical techniques. Descriptive statistics, such as means, standard deviations, and frequency distributions, summarize the principal characteristics of the data set and give a concise overview of the responses. Inferential techniques, namely correlation and regression analysis, examine the relationships between the variables: correlation analysis shows the direction and strength of the associations, while regression analysis evaluates the effect of technological advancements on trust and accountability.
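The two inferential steps can be sketched in Python (rather than SPSS) as below; the survey values shown are hypothetical, for demonstration only.

```python
# Illustrative Pearson correlation and simple OLS regression of
# trust on perceived technological advancement (hypothetical data).
import math

def pearson_r(xs, ys):
    """Direction and strength of the linear association."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def ols(xs, ys):
    """Slope and intercept of ys regressed on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

tech = [3.1, 3.8, 4.0, 4.4, 4.7, 3.5]   # perceived tech advancement (1-5)
trust = [3.0, 3.6, 4.1, 4.3, 4.8, 3.4]  # trust in AI systems (1-5)

r = pearson_r(tech, trust)
slope, intercept = ols(tech, trust)
print(f"r = {r:.2f}; trust = {intercept:.2f} + {slope:.2f} * tech")
```

A positive `r` close to 1 indicates a strong positive association, and a positive slope indicates that higher perceived technological advancement predicts higher trust.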

Demographic Characteristics
A brief overview of the respondents' demographics is necessary before turning to the findings and analysis. In terms of age, 25% of the respondents are in the 18-25 age range, 40% are in the 26-35 age group, 25% are in the 36-45 age group, 15% are in the 46-55 age group, and 5% are 56 and above. The wide range of ages represented guarantees a thorough examination of viewpoints from various professional phases. With 60% of respondents identifying as male and 40% as female, the gender distribution is balanced, promoting inclusivity in the study's findings. Regarding education, 10% completed only high school, 60% hold a bachelor's degree, 25% a master's degree, and 5% a doctorate, reflecting a wide range of educational backgrounds.
By function, 30% of respondents work in production/operations, 15% in research and development, 20% in sales and marketing, 10% in human resources, 15% in information technology, and 10% in other designated areas within their firms.
There is variation in the amount of experience: 5% have less than a year, 25% have 1-5 years, 30% have 6-10 years, 20% have 11-15 years, and 20% have 16 or more years. The distribution of respondents reflects organizational diversity: 35% are Entry Level/Staff, 40% Mid-Level Management, 20% Senior Management, and 5% Executive/Leadership.

Trust in AI Systems
The examination of responses pertaining to trust in AI systems uncovered notable insights. Participants rated the elements that contribute to trust on a scale of 1 to 5, where 5 denotes strong trust. The mean scores for each trust factor are listed in Table 1.

Table 1. Trust Factors in AI Systems (Source: results of the author's data analysis, 2023)

Trust Factor     Mean Score (out of 5)
Transparency     4.2
Reliability      4.1

According to the analysis, respondents had a high degree of general trust in AI systems, with transparency (mean = 4.2) and reliability (mean = 4.1) having the greatest effects. This indicates a favorable view of AI systems' consistent performance and the intelligibility of their decision-making.

Perceived Accountability
The perceived accountability analysis sheds light on participants' perceptions of AI systems' accountability and transparency in decision-making. The main findings are outlined in Table 2.

West Science Interdisciplinary Studies, Vol. 01, No. 10, October 2023: pp. 1019-1027

Table 2. Perceived Accountability in AI Decision Support (Source: results of the author's data analysis, 2023)