The application of algorithmic cognitive decision trust modeling for cyber security within organisations

Cybercrime poses an increasing threat to business processes, eroding stakeholders' trust in internet technologies. In this research paper, we explore how six dominant algorithmic trust positions facilitate cognitive processing, which, in turn, can influence an organisation's productivity and align its values and support structures for combating cybercrime. This conceptual paper uses a cognitive perspective described as a Throughput Model. This modeling perspective captures several dominant algorithmic trust positions for organisations, providing a new and powerful approach that seeks to enhance our understanding of the cognitive representation of decision-making processes. These trust positions are (1) rational-based trust, (2) rule-based trust, (3) category-based trust, (4) third-party-based trust, (5) role-based trust, and (6) knowledge-based trust. Finally, we provide conclusions and implications for future research.


INTRODUCTION
One of the major concerns for managers is the threat from cybercrime that influences trust systems in organisations [1,2]. Thus, organisations have built artificial intelligence systems that use human reasoning as a model to solve fraud problems [3]. Fraud is an intentional dishonesty that harms a person or organisation by causing an economic loss and/or enabling the individual(s) responsible to realise a gain [4,5]. Risk refers to the possibility of loss, which arises because of uncertainties or our inability to foresee the future [5][6][7]. This study uses a cognitive decision-modeling approach that allows for the examination of individual algorithmic pathway levels. Decision-making is the process by which we utilise our perceptions and information in order to form judgments and make choices that accomplish our goals [8].
Recent research has confirmed that people vary in the degree to which they form normative judgments and preferences on thinking-bias tasks [9][10][11]. The work of Tombu and Mandel [23] has demonstrated that cognitive filters, such as decision heuristics, can influence the way people perceive information. That is, when confronted with an expected loss and a choice between a sure option and a risky option, the gain-loss framing of the problem has been shown to influence option preference. Under prospect theory, this framing effect is the consequence of contradictory attitudes pertaining to risks involving gains and losses.
Building on this seminal work, Culbertson and Rodgers [12], Rodgers [13], Foss [14,15], and Rodgers and Al Fayi [9] found that by implementing a Throughput Modeling approach, it was possible to represent risky decision making as including perception (P), information (I), judgment (J), and decision choice (D). The Throughput Model assumes that information inputs pass through the cognitive filters of perception and judgment before decision choices are made (see Figure 1).
-------------------------------- Insert Figure 1 about here --------------------------------

In addition, this research paper utilises propositions to suggest links between concepts, which point to promising areas of inquiry for researchers. Further, we use propositions to spur further research on several "trust questions," especially as they relate to artificial intelligence, in the hope that further evidence or experimental methods will be discovered that make testable hypotheses possible. Finally, propositions serve as common assumptions that can support further speculation. This can occur with extremely complex artificial intelligence algorithms, such as those dealt with by the sociology and economics of artificial intelligence's impact on users, where an experimental test would be prohibitively expensive or difficult [28].
Furthermore, the Throughput Model advances six distinct algorithmic pathways tied to six dominant trust positions [16,17]. These algorithms are part of an artificial intelligence model (i.e., the Throughput Model), which allows us to find solutions to a problem [18]. The trust positions tied to the Throughput Model are (1) rational-based trust (PD), (2) rule-based trust (PJD), (3) category-based trust (IJD), (4) third-party-based trust (IPD), (5) role-based trust (PIJD), and (6) knowledge-based or historical/dispositional trust (IPJD) [4,9,[19][20][21]. In sum, these algorithms provide a sequence of steps implemented to solve a problem. Each sequence offers a unique way of addressing an issue by delivering a particular solution. Based on Figure 1, we can establish six general pathways that can be applied to the six dominant trust positions below. This line of research revealed that the resulting model was applicable across a wide range of general business decision-making contexts, and it was expanded to incorporate risky decision-making activities along with "trust" and "ethical" positions [4,9,20]. In light of this, this paper proposes a Throughput Model that draws from the computer science, economics and psychology literatures to model a perceptual and judgmental process whereby trust might be implemented to reduce fraud and risks [6,20] (see Figure 2).
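These stage sequences can be written down directly. The following is a minimal sketch: the dictionary mirrors the pathway list above, while the `trace` helper is a hypothetical illustration added here, not part of the Throughput Model itself.

```python
# Illustrative sketch: the six Throughput Model trust pathways as
# ordered sequences of processing stages, following the paper's
# notation: P = perception, I = information, J = judgment,
# D = decision choice. The mapping comes from the text; the trace()
# helper below is a hypothetical convenience for printing a pathway.

TRUST_PATHWAYS = {
    "rational-based":    ["P", "D"],
    "rule-based":        ["P", "J", "D"],
    "category-based":    ["I", "J", "D"],
    "third-party-based": ["I", "P", "D"],
    "role-based":        ["P", "I", "J", "D"],
    "knowledge-based":   ["I", "P", "J", "D"],
}

def trace(position):
    """Return the stage sequence (e.g. 'P -> J -> D') for a trust position."""
    return " -> ".join(TRUST_PATHWAYS[position])

print(trace("rule-based"))       # P -> J -> D
print(trace("knowledge-based"))  # I -> P -> J -> D
```

Every pathway terminates in a decision choice (D); the positions differ only in which cognitive filters precede it, which is what makes them comparable as algorithms.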
Prospect theory offers an elegant account of the perception framing effect. We add to the literature by asserting that there are six dominant algorithmic pathways to a decision choice, which allows for greater potential in terms of examining how risk attitudes are assessed in risky-choice framing problems. Some studies have questioned the generalisability of the framing effect due to predictable eliminations and reversals of the framing effect [22,23]; in other words, these are findings that cannot be accommodated by the explanation that preference reversals (i.e., framing effects) are mediated by concomitant reversals of risk attitudes.
This conceptual research paper embeds trust positions in the Throughput Model based on two types of process errors. A type 1 process error arises where decision makers are expected to avoid the risk in a risky decision-making situation or to intervene actively in an alternative with the help of a risk-defusing action. A type 2 process error arises where the decision maker can select a less risky alternative (passive risk avoidance) [24]. Dual-process theories of cognitive processing distinguish unconscious, emotional, intuitive and effortless characteristics (Type 1 processing) from conscious, controlled and effortful characteristics (Type 2 processing) (e.g., [25,26]). In system terms, the type 1 process error represents a rejection of individuals who should be admitted to a system (e.g., an accounting/auditing/information system) or network (i.e., a type 1 error, or false rejection rate). The type 2 process error represents an acceptance of individuals who should not be admitted to a system or network (i.e., a type 2 error, or false acceptance rate).
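In access-control terms, the two process error rates can be computed directly from a set of admission decisions. A minimal sketch, with hypothetical records:

```python
# Illustrative sketch: computing the type 1 (false rejection) and
# type 2 (false acceptance) process error rates for a set of access
# decisions. Each record pairs the system's decision (admitted?) with
# ground truth (should this individual have been admitted?). The
# sample records below are hypothetical.

def error_rates(records):
    """records: list of (admitted: bool, legitimate: bool) pairs.
    Returns (false_rejection_rate, false_acceptance_rate)."""
    legit = [admitted for admitted, legitimate in records if legitimate]
    illegit = [admitted for admitted, legitimate in records if not legitimate]
    # Type 1 error: legitimate individuals rejected by the system.
    frr = sum(1 for admitted in legit if not admitted) / len(legit)
    # Type 2 error: illegitimate individuals admitted by the system.
    far = sum(1 for admitted in illegit if admitted) / len(illegit)
    return frr, far

decisions = [(True, True), (False, True), (True, False), (False, False)]
print(error_rates(decisions))  # (0.5, 0.5)
```

Tightening admission criteria trades a lower false acceptance rate for a higher false rejection rate, which is exactly the trade-off the trust positions discussed below are meant to manage.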
In this paper, we investigate differences between active (type 1) and passive (type 2) risk avoidance in trust situations. More specifically, this paper aims to identify appropriate trust positions to reduce/increase the type 1 and type 2 process errors, and then discusses the implications of using a particular trust position in relation to people, processes and technology [4,6,20,27]. Sections 2 and 3 clarify and highlight the issue of trust and trustworthiness. The discussion explores the relationship between the Throughput Model and the dominant trust positions (see Table 1).
-------------------------------- Insert Table 1 about here --------------------------------

The aforementioned processes help to tie trust positions to the Throughput Modeling paradigm, which in turn generates propositions. An initial stage in the scientific process is not observation, but the generation of hypotheses or propositions, which may then be tested critically by observations and experiments. Thus "proposition generation" is a necessary step in addressing critical issues surrounding people, processes, and technology. Likewise, Popper [28] makes the vital assertion that the goal of the scientist's efforts is not the verification but the falsification of the initial hypothesis. It is understandably unattainable to confirm the truth of a general law by repeated observations. Nonetheless, at least in principle, it is possible to falsify such a law by a single observation. Therefore, the propositions assist in identifying and exploring the six dominant trust positions' relationship with fraudulent transactions and risk factors.
Finally, we conclude with a summary outlining implications for research and practice dealing with forensic and fraud organisational systems.

Definition of trust
Most literature on trust fails to distinguish trust from trustworthiness. Trust is a social psychological factor, which includes the reduction of control and a willingness to accept vulnerability and risk based upon positive expectations of the actions of the trustee [29].
Trustworthiness, on the other hand, involves the ability, benevolence and integrity of a trustee [30,31]. Some scholars view trust as synonymous with trustworthiness and explain trust in the context of personal attributes of the trustee that impel positive expectations [32,33]. Whilst some scholars view trust as a behavioral intention rather than a psychological factor [30,33], others view trust as a biological component within the individual, which develops early in life and remains relatively stable through adulthood (Webb and Worchel [34]). In this regard, Mayer, Davis and Schoorman [30] adopted an integrative model that defines trust by using the trustworthiness variables (benevolence, ability and integrity) as antecedents of trust.
Their model attempts to separate the trustworthiness variables into two major components: an ability component and a character component. The ability component measures the 'can do' aspects, whereas the character component measures the 'will do' aspects. Trust decisions affect a company's relationship with its community, customers, employees, stockholders and suppliers [35,36]. Thus, the role of trust positions in achieving competitive advantage is becoming increasingly popular amongst organisations of all kinds and sizes [9,19,37]. The impact of trust on organisational performance and productivity has received considerable interest in recent research, such as cyber decision-making [38][39][40], e-commerce [41], and accounting/auditing research [42][43][44][45][46]. In the trust literature, trust serves as a lubricant for the wheels upon which all business transactions and relationships are based [47,48]. Trust plays a central role in every sustainable business endeavor because it can reduce agency and transaction costs, ensure the smooth operation of transactions, and increase innovation and productivity [49]. Trust decisions occur in an environment of uncertainty, where stakeholders face vulnerable (risky/uncertain) situations leading to a dependence or reliance on management for security [50,51]. Shareholders must trust managers, employers must trust employees, buyers must trust sellers, the public must trust business, and the government must trust business. Unfortunately, there is a scarcity of trust following the prevalence of recent corporate scandals (e.g., Arthur Andersen, Enron, Tyco, Adelphia, WorldCom, etc.). The impact of these corporate scandals on stakeholders' trust is significant.
Furthermore, Rodgers [5,19,20] argues that three primary trust algorithmic pathways (rational choice, rule-based trust and category-based trust) underscore the basis of trust relationships. Expertise level, incomplete information, rapidly changing environments, and/or time pressure strongly influence the implementation of these primary trust algorithmic pathways [20]. However, the refinement of the interaction of people, processes and technology will influence information exchange and individuals' perceptions. As a result, this can further yield three secondary, higher-level trust algorithmic pathways: third-party-based trust, role-based trust and knowledge-based trust [19,20,27]. To counter increasing threats (e.g., cybercrimes resulting in fraud, errors and risks) to business processes and shareholders' trust, we analyse and explore how fraudulent schemes are affected differently by employing one, or a combination, of these trust positions. We also investigate the interrelated processes of the Throughput Model and trust algorithmic pathways that have an impact on decisions affecting organisations.
Advanced Internet technology has now reached a point where improved safety can be achieved through a better understanding of human error mechanisms [52] and trust relationships [21]. Human error is a causal or contributing factor in accidents, particularly in the security industries. Consequently, these trust positions could protect information systems, electronic commerce, cyber-based technologies and the business environment [53].
For example, cyber-related security threats have presented debilitating consequences for organisations and have significantly impacted economic activities [20,41,54]. As errors are intimately bound with the notion of intention, organisations are confronted with decisions regarding type 1 versus type 2 process errors [25]. In this regard, Zapf and Reason [54] suggested that errors lead to "the non-attainment of corporate goals"; therefore, the dominant trust positions introduced in this study work on the assumption that errors should be potentially avoidable. Moreover, it has been recognised that there is a constructive dimension to trust-building systems embedded within the daily operations of organisations [55][56][57]. In particular, the challenges of increasing interpersonal communication and online transactions in a system or network have led many researchers to investigate the impact of online trust on cognitive processes [41,[58][59][60][61][62]. The overwhelming conclusion is that cybercrime continues to pose an increasing threat to the people, processes and technology of businesses, impacting organisational values and eroding stakeholders' trust. Trust plays a critical role in developing organisational relationships internally and externally because of the uncertainty, risk, fear, and interdependence factors involved in the decision-making process [60][61][62][63] (see Table 2).

Throughput Model Methodology
This paper utilises the Throughput Model (see Figure 1) to gain further insight into how organisations can create an environment that engenders trustworthy behavior. To our knowledge, this is the first study integrating different trust positions, fraud, risks and errors in decision-making algorithmic pathways that might be useful in reducing fraudulent behaviours.
Figure 3 illustrates the three key enablers of fraud, which can be captured by the fraud triangle. The fraud triangle consists of perceived opportunity, perceived pressure/incentive, and rationalisation justification of fraud [5,64], and it is used to diagnose high-risk fraud situations. Perceived opportunity is the possibility of entry into a situation where fraud can be carried out, for example, where there are weaknesses in an internal control system. Perceived pressure/incentive addresses the motivation or underlying drive for individuals to commit fraud. Rationalisation represents the propensity for individuals to 'bend' their ethical positions, moral standards, and the like to justify their fraudulent activities [5].
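As a purely hypothetical illustration (the fraud triangle prescribes no quantitative scoring; the equal weights and the 0.6 threshold below are invented for this sketch), the three enablers could feed a simple screening score:

```python
# Hypothetical sketch: flagging high-risk fraud situations from the
# three fraud-triangle enablers named in the text. The equal weighting
# and the 0.6 threshold are illustrative assumptions, not part of the
# fraud-triangle literature.

def fraud_risk(opportunity, pressure, rationalisation, threshold=0.6):
    """Each enabler is a score in [0, 1]; returns (score, flagged)."""
    score = (opportunity + pressure + rationalisation) / 3
    return score, score >= threshold

# A weak internal control system (high opportunity) combined with
# strong pressure flags the situation even when rationalisation is low.
score, flagged = fraud_risk(opportunity=0.9, pressure=0.8, rationalisation=0.4)
print(round(score, 2), flagged)  # 0.7 True
```

In practice, an organisation would calibrate such weights against its own incident history rather than treat the enablers as equally important.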
Perception and information depend on each other in the Throughput Model because information can influence how a decision maker frames a problem (perception) or selects evidence (information) to be used in the decision-making process.
In Figure 1, perception (P) can be influenced by an individual's educational background, religion, beliefs, communal values, upbringing, etc. Perception depicts the framing of an organisational environment, which involves risk assessment and perceiving fraudulent transactions such as cyber fraud, high-risk transactions, cyberattacks, etc. Previous studies posit that a change in framing (i.e., risk perception) influences risk preferences and risk attitude.
Thus, changes in risk perception may lead to a pronounced shift from risk aversion to risk taking [23,66]. Such findings brought into question rational-choice theories of human decision making due to violations of the description-invariance principle (i.e., fixed preferences across different descriptions of identical choice problems), one of the least questionable tenets of rational-choice theories.
Information (I) includes customer databases, organisations' databases, forensic evidence, social networks, financial information, governmental agencies' reports on fraud, etc.
In the judgment (J) stage, financial and non-financial information is scrutinised and weight is placed on key information, which is compared with other alternatives. We argue that experts such as auditors, forensic accountants and cybercrime investigators usually draw on their knowledge base and expertise to examine situations and collect evidence. Finally, in the decision choice (D) stage, we argue that experts make trustworthy decisions based on combinations of perception, information, and judgment.
In addition, the Throughput Model in Figure 1 reflects the interdependency between perception (P) and information (I). That is, this relationship (PI) reflects a neural network that simulates human thought and makes deep learning techniques possible for machine learning by drilling down on informational (I) databases [67]. Deep learning (also known as deep structured learning or hierarchical learning) is part of a wider family of machine learning methods based on learning data representations, as opposed to task-specific algorithms [68].
Rodgers [19,20] argued that trust positions in the Throughput Model play a role as a cognitive process, which is rationally based on one's interest (incentive), on normative reasons, or on reasons of character or psychological disposition. Therefore, the underlying trust depends on the assessment of the trustworthiness of another in a particular situation [69]. Most importantly, the Throughput Model enables decision makers to understand why individuals have selected information that supports their trust positions and have ignored other information that does not support their positions. The following sub-sections discuss the six algorithmic trust pathways based on the Throughput Model. These algorithmic trust pathways represent:
1. Trust as a rational choice: a presumed understanding of the other party's desires and intentions.
2. Rule-based trust: trusting someone due to a strictly enforceable normative rule or legal system.
3. Category-based trust: social networks sharing some common experience, tradition, education, custom, culture, religion, and so forth.
4. Third-party-based trust: people use themselves or the people around them as their basis for defining trust.
5. Role-based trust: tied to formal societal structures, depending on individual attributes.
6. Knowledge-based trust: people have enough relevant and reliable information about others to understand them and accurately predict their likely behavior.
The following sub-sections discuss each algorithmic pathway and its propositions.

(1) PD (rational-based trust)
According to Rodgers [19,20,70], the PD algorithmic pathway represents trust as a rational choice, which is the quickest way to make a decision. Here, the trust decision takes perceptual preference as an important determinant of a decision choice because individuals are usually motivated to act in their perceived self-interest. In rational-based trust, individuals prioritise the maximisation of their expected gains and the minimisation of their expected losses. This trust algorithmic pathway primarily manifests in situations of low risk/high certainty. For example, where the monetary amount involved in a transaction is negligible, individuals may adopt a rational-based trust position. Time pressure, difficulties in interpreting information and rapidly shifting environmental conditions are amongst the factors that can influence people to select this particular trust algorithmic pathway, as can the level of knowledge or expertise of the individuals involved. Research suggests that time pressures may alter both the cognitive and emotional processes involved in risky decision making [71][72][73].

For example, time pressures may have a damaging effect on cognitive processes, such as impairing working memory capacity (e.g., [74,75]) and reducing decision accuracy (e.g., [75]). In addition, anticipatory stress has a negative influence on learning and information-processing abilities [73]. Hence, in a high-risk situation, certain individuals with a requisite level of expertise will ignore incomplete information and judgment and make a quick decision choice. For example, internet users may face many barriers to international cyber transactions resulting from disparate regulations in various foreign countries and an overall lack of familiarity and information regarding webpage platforms.

Proposition 1a: In a time-pressured environment of incomplete information, high levels of expertise between the parties (online or offline) will result in a highly trustworthy relationship.

Proposition 1b: In a time-pressured environment of incomplete information, low levels of expertise between the parties (online or offline) will result in a weak trust relationship.
(2) PJD (rule-based trust)
In a strictly rule-based situation, results that depend entirely on trust are expected to decline in the long term. On the contrary, when an organisation's approach calls for fewer rules, employees are allowed to bring their innovations and initiative to bear in the production process. This will result in higher productivity and lower transaction costs [78][79][80][81].
When situations are less than fully rule-based, a higher level of trust will have the opportunity to result, particularly in situations where information on the internet is neither weak nor strong in directing a user toward an outcome. Trust helps to "tip the scales" by helping a person to interpret previous behavior and/or assess the future behavior of another party. For example, it is impractical to have written rules that deal with trust issues when communicating on a webpage based on feelings, values, and beliefs.
Proposition 2a: Trustworthy relationships that are based on highly transparent, responsible, accountable and enforceable rules and regulations will lead to low levels of false rejection and/or false acceptance into the network system.

Proposition 2b: Trustworthy relationships that are based on less transparent, responsible, accountable and enforceable rules and regulations will lead to high levels of false rejection and/or false acceptance into the network system.
(3) IJD (category-based trust) Category-based trust refers to direct information that has an impact on judgment, which in turn influences decision choice.The category-based trust emphasises the fact that individuals are subject to preformatted information regarding relationship types [20].The category-based trust operates on the philosophy that people and relationship types can be grouped into segments with similar characteristics.For example, organisations can categorise their suppliers or customers into different segments.In this situation, the level of trust is high because organisations have adequate and reliable information about each segment.On the other hand, the level of trust will be low if organisations have incomplete or unreliable information about the segment.Category-based trust highlights the relationships that exist amongst individuals within social networks [82][83][84].Individuals within a particular social group usually share similar values, cultures, norms, and belief systems etc. [84].The strength of a category-based relationship is linked to its frequency, reciprocity, emotional intensity and trusting relationships to build slowly and incrementally over time, especially when it involves inclusion in a category.For example, relative knowledge regarding a particular website as well as other friends and family members use of the website can be reflected in completing future monetary transactions on the same website.
Proposition 3a: Complete and reliable information about the organisation's customer/supplier segments will lead to stronger online trust relationships.

Proposition 3b: Incomplete and unreliable information about the organisation's customer/supplier segments will lead to weaker online trust relationships.
These three primary algorithmic pathways emphasise either problem framing (P) or information (I), but not both [20]. Furthermore, the three primary algorithmic pathways encapsulate an understanding of trust and distrust within people relationships [85][86][87]. We can associate trust (high, low), no trust, and distrust (low, high) in the algorithmic pathways with values that vary from +1 (the highest trust) to -1 (the highest distrust). Each path can have a positive (+), negative (-) or zero (0) sign to represent the magnitude of trust, distrust or no trust.
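The signed trust values can be sketched as follows. Assigning per-link scores in [-1, +1] follows the text, while averaging them into a pathway-level score is a hypothetical combination rule, not one prescribed by the Throughput Model:

```python
# Illustrative sketch: representing trust (+), distrust (-) and
# no trust (0) on the links of an algorithmic pathway as values in
# [-1, +1], as described in the text. Averaging the link values into
# a single pathway-level score is a hypothetical combination rule
# chosen for this sketch.

def pathway_trust(link_values):
    """link_values: per-link scores in [-1, +1] along one pathway."""
    if any(not -1 <= v <= 1 for v in link_values):
        raise ValueError("trust values must lie in [-1, +1]")
    return sum(link_values) / len(link_values)

# A rule-based trust pathway (P -> J -> D) has two links:
print(pathway_trust([0.75, 0.25]))   # 0.5    (net trust)
print(pathway_trust([-0.5, 0.25]))   # -0.125 (net distrust)
```

Keeping the sign at the link level preserves the distinction the text draws between distrust (a negative value) and the mere absence of trust (zero).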
Rodgers [20,70,88] argued that the primary trust algorithmic pathways can be interrelated by perception and information via three secondary, higher-level trust algorithmic pathways: third-party-based trust (IPD), role-based trust (PIJD) and knowledge-based trust (IPJD).

(4) IPD (third-party-based trust)
This trust algorithmic pathway relies on a third party as a channel of trust [20]. In this instance, decision makers use the people around them as a basis for defining their trust pathways, which serves to reinforce their existing perception. As a result, one is more certain of his or her trust (or distrust) in another. Third-party-based trust therefore depends on the indirect connection between one entity and a third party and the indirect connection between the two entities. For example, with third parties as conduits of trust, an internet user desiring to purchase shoes on the internet relies on the people around them who promote buying shoes on a particular website. Third-party information serves to reinforce existing webpage use, making one's perception more certain of his or her trust (or distrust) in a particular webpage.
Proposition 4a: Relevant and reliable third-party information can result in a high trust relationship between two parties involved in a network transaction.
Proposition 4b: Non-relevant and unreliable third-party information can result in a low trust relationship between two parties involved in a network transaction.
(5) PIJD (role-based trust) The basis of trust in this algorithmic pathway depends on the role (profession, expertise, position, attribute, authority etc.) of the party to be trusted [20].In this algorithmic pathway, people trust that specific role types can deliver specific desire outcomes.An example of rolebased trust is gaining certification from an engineer, accountant, medical doctor etc.For example, shareholders trust in the role of auditors because they believe that auditors have skills and professional expertise to audit the accounts of organisations.In addition, audit/accounting experts ensure that all of their members adhere to strict professional conduct.Furthermore, employees are prepared to accept a manager's decision due to the manager's organisational role and authority.Individuals' trust in their organisational authority (management) shapes their willingness to follow the rules and regulations of the organisation [89].In addition, reliable information about personal qualities, social limitations of others and existence of trustworthy communication architecture are crucial for making trustworthy decisions [90][91][92].
In other words, trust "is cultivated out of productive inquiry rather than imperceptive acknowledgment" [93].
Examples of role-based trust are the certification of a web-based plumber or a medical doctor. That is, we trust a medical doctor because we trust the practice of medicine and believe that medical doctors are trained to apply valid principles of medicine. In addition, we have evidence every day that these principles are valid when we observe certain recommended remedies saving lives.
Proposition 5a: A high level of expertise on the part of the auditor, forensic accountant or cybercrime investigator supports a determination that an individual's trustworthiness is high, minimising both false rejections and false acceptances into the network.

Proposition 5b: A low level of expertise on the part of the auditor, forensic accountant or cybercrime investigator undermines determinations of an individual's trustworthiness, making it harder to minimise false rejections and false acceptances into the network.

(6) IPJD (knowledge-based trust)
This algorithmic pathway expands on rule-based trust in that past and/or present information (knowledge) can influence individuals' perceptions, which in turn affect their judgment and decision choices [20]. The knowledge-based trust algorithmic pathway is characterised by fewer time pressures and a reasonable level of expertise in an unstructured environment, allowing decision makers to form judgments about the probability of trustworthy behavior in others [20]. In this trust algorithmic pathway, trust is considered a function of 'general expectations' premised on past and present information. Knowledge-based trust transpires when individuals or organisations have enough relevant and reliable information about webpage-based companies to understand them and accurately predict their likely behavior. For example, organisations' web pages on the internet vary by size and industry, and the environment in which they carry out their operations is shaped by differing legal traditions. Consequently, knowledge-based trust pathways permit flexibility in the design of mandatory and non-mandatory measures in a global cyber context.
Proposition 6a: Reliable and relevant information will encourage higher levels of trustworthiness over and above rules and laws [94].

Proposition 6b: Unreliable and irrelevant information will encourage lower levels of trustworthiness, over and above rules and laws [94].

The type and level of trust pathways employed by an organisation may influence its productivity, competitiveness and value.

Conclusions and implications
Artificial intelligence techniques such as trust decision-making algorithms can assist our understanding of employing machine learning and deep learning to solve fraud-type problems in the future. This conceptual research paper has argued that the first step in the scientific process is not observation, but the generation of propositions (or hypotheses), which may then be tested critically by observations and experiments. Type 1 and type 2 errors can occur because of people, process, and technology biases (observer, instrument, recall, etc.).
Therefore, this theoretical research paper has identified appropriate trust positions to implement in order to address type 1 and type 2 errors. A type 1 error can contribute to inefficiencies and higher transaction costs, which can spell reduced productivity in a cyber system. Furthermore, the admittance associated with a type 2 error creates fraud triangle characteristics consisting of perceived opportunity, perceived pressure/incentive, and rationalisation justification of fraud. These characteristics are symptomatic of a problematic cyber system.
Our implications of using a particular trust position depend on the controlling factors influencing type 1 and type 2 errors in relationship with people, processes and technology.
Furthermore, the six dominant trust positions or algorithmic pathways were tied to situations that could lead to type 1 or type 2 errors. These trust positions denote: (1) rational-based trust, (2) rule-based trust, (3) category-based trust, (4) third-party-based trust, (5) role-based trust, and (6) knowledge-based trust.

Trust behavior is a prerequisite for knowledge production and its exchange.
Individuals are not machines.They think and have feelings.When they pursue activities or communicate ideas, they are trusting in others.In addition, trust as a relational and institutional asset supports competitive advantages.Therefore, trust can be viewed as an intangible asset that adds value to an organisation.
(1) Rational-based trust
Type 1 error: an overly rigid presumption of the other party's desires and intentions, thereby denying the correct people entry to or use of the cyber system.
Type 2 error: an overly accommodating presumption of the other party's desires and intentions, hence allowing inappropriate people to enter or use the cyber system.

(2) Rule-based trust
Type 1 error: guidelines and procedures are overly strict, preventing admission into the cyber system of individuals who should be allowed in.
Type 2 error: guidelines and procedures are too lax, admitting the wrong individuals into the cyber system.

(3) Category-based trust
Type 1 error: appropriate people in the same social networks (i.e., sharing some common experience, tradition, education, customs, culture, religion, etc.) are not allowed into the cyber system due to an overly strict system of classification.
Type 2 error: the wrong people in the same social networks are allowed into the cyber system due to a weak system of classification.

(4) Third-party-based trust
Type 1 error: people are denied use of the cyber system due to an overly critical use of supporting information sources for reliability and relevance.
Type 2 error: people are admitted to the cyber system due to weak supporting and relevant information.

(5) Role-based trust
Type 1 error: people are denied use of the cyber system due to overly critical formal structures for judging individual attributes.
Type 2 error: people are admitted to the cyber system due to weak formal structures for judging individual attributes.

(6) Knowledge-based trust
Type 1 error: people are denied use of the cyber system due to an overly critical evaluation of the relevant and reliable information used to understand others and accurately predict their likely behavior.
Type 2 error: people are admitted to the cyber system due to a weak evaluation of the relevant and reliable information used to understand others and accurately predict their likely behavior.

Cost assessment
Type 1 error: costs (actual costs plus the manager's lost credibility) associated with scrambling the organisation to find a nonexistent virus.
Type 2 error: replacement cost for the damage done by the virus, plus the cost of a new or modified system.
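The cost assessment above suggests that the two error types should be weighed by their consequences, not merely counted. The following sketch makes that comparison concrete; the error probabilities and cost figures are hypothetical, chosen only to illustrate how a cheap, frequent type 1 error (a false virus alarm) can still be preferable to a rare but expensive type 2 error (real virus damage).

```python
# Sketch with hypothetical figures: weighing type 1 vs type 2 error costs
# to compare a strict control regime against a lax one.

def expected_cost(p_type1, p_type2, cost_type1, cost_type2):
    """Expected cost of a control regime, given error probabilities and costs."""
    return p_type1 * cost_type1 + p_type2 * cost_type2

# Type 1 cost: scrambling the organisation to chase a nonexistent virus.
# Type 2 cost: damage done by a real virus plus system replacement.
COST_FALSE_ALARM = 10_000
COST_BREACH = 250_000

strict_regime = expected_cost(0.20, 0.01, COST_FALSE_ALARM, COST_BREACH)  # ~4,500
lax_regime = expected_cost(0.02, 0.10, COST_FALSE_ALARM, COST_BREACH)     # ~25,200
```

Under these assumed figures the strict regime is cheaper overall, even though it triggers ten times as many false alarms.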

The six trust positions map onto the following algorithmic pathways, where I = information, P = perception (problem framing), J = judgment, and D = decision choice:

(1) P→D: trust as a rational choice
(2) P→J→D: rule-based trust
(3) I→J→D: category-based trust
(4) I→P→D: third parties as conduits of trust
(5) P→I→J→D: role-based trust
(6) I→P→J→D: knowledge-based trust

(2) PJD (rule-based trust)
This trust position emphasises the 'power base', i.e., the use of rules, laws, regulations, etc., to influence the trust position of individuals [20]. Rule-based trust can be categorised under explicit and implicit contracts. Under the explicit contract, the individual's trust position is influenced by factors including his/her contract of employment, job description, and organisational policies and procedures. The implicit contract includes the individual's own personal values and the organisational culture, values, norms, etc. In a risky or uncertain environment, organisations use structures and power to influence the individual's trust position. The structural and interpersonal components of rules are likely to influence perceived trust [76]. Under rule-based trust, direct information is ignored because of either its unreliability or incompleteness. Currall and Epstein [77] argued that, "because rule-based trust involves personal consequences, trust position under the rule-based trust is individual oriented." Individuals may also adopt the rule-based trust position as a result of influences such as sets of spiritual doctrine, codes of trust for professionals (accountants and auditors), codes of conduct specific to certain organisations, social values, etc. Rules, practices and mechanisms are unlikely to change suddenly; rather, they are mentally represented as assimilated knowledge that can influence the individual's trust decision. In a strong rule-based

Firstly, an information source (I) mediates and changes trust as a rational choice (P→D) into third-party-based trust (I→P→D). Next, problem framing (P) reconstructs category-based trust (I→J→D) into role-based trust (P→I→J→D). Finally, information (I) transforms rule-based trust (P→J→D) into knowledge-based trust (I→P→J→D). The three secondary higher-level trust algorithmic pathways thus supplement the primary algorithmic pathways by adding either problem framing (P) or the gathering of information (I), as discussed below.

(4) IPD (third-party-based trust)
A vast variety of Internet devices, including institutions, norms, cyberware, etc., enables individuals and organisations to cooperate in an efficient and effective manner.

The Throughput Model can be useful in understanding what causes individuals not to exploit the cyber world, yielding positive results. Trust augmented in a positive manner is 'good' for Internet traffic according to the ethical principles of normative philosophy, not according to the moral standards of a given group or culture. Beliefs about what is right, just and fair are possible influences on information network systems. The management of knowledge and technology in organisations is critical to competitive advantage and organisational success. This study highlights how decision-makers' perceptual framing, along with information, can greatly influence decision choices. The Throughput Modeling perspective discussed in this paper reinforces the fact that the different algorithmic pathways depend upon risk factors embedded in trust positions representing cognitive, behavioral, individual and social inputs that modify decision choices. Future research can investigate whether a particular trust position for cyber platforms, supported by a particular decision-making pathway, is more appropriate in a given situation involving trust. In addition, future research can explore which decision-making pathway best typifies relationships between organisations and individuals communicating across the Internet. Finally, the Throughput Model's different algorithmic pathways can allow us to better understand how trust is nurtured and eroded as different parties interact.