Management Practices Business Strategy

Management Practices and Business Strategy

Although research findings are mixed, there is strong evidence that management practices are related to productivity, growth, decline, and failure in organizations. This report sets forth a multidisciplinary review and summary of the evidence for such a relationship. It presents a close reading of three peer-reviewed journal articles in terms of how management practices relate to the growth, productivity, failure, and decline of companies: Bloom et al. (2012) and Sadun et al. (2017) on management practices, and Teece (2007) on explicating dynamic capabilities. Taken as a whole, the three articles share similar sentiments: all establish a solid positive relationship between management practices and growth, productivity, failure, and decline. Nonetheless, in the wider literature, research findings remain equivocal.

Some researchers have established a positive relationship between management practices and growth, productivity, failure, and decline; some have found a negative correlation; and others have found no correlation at all (Bender, Bloom, Card, Van Reenen & Wolter 2018). According to Bloom et al. (2019), this lack of consensus may stem from issues related to either the level of analysis or measurement. With this said, there is an urgent need for further research, especially a multi-level approach to evaluating the impact that management practices have on productivity.

Management Practices, Productivity Growth, and Decline

There have been endless debates as to whether organizations are more likely to succeed if they embrace good management practices, and scholars have been conducting studies for over a decade to settle them, reexamining long-held assumptions to confirm whether they stand the test of time (Bloom et al. 2019). Since Frederick Taylor’s work on the principles of scientific management, organizations have traditionally followed a formalized set of best practices, three of which are considered essential to good management: targets, incentives, and monitoring. Most organizations around the world are poorly managed. Well-managed organizations establish achievable targets for productivity and growth and tie promotions and compensation to attaining those targets (Bloom, Sadun & Van Reenen 2012). Unlike poorly managed organizations, these companies continuously measure results.

Better management practices are strongly correlated with superior performance, higher return on capital, and increased productivity. Clearly, core management practices cannot be taken for granted: there are significant differences in how organizations accomplish basic tasks such as grooming talent and setting targets. Organizations with robust managerial processes perform significantly better on high-level metrics such as profitability, productivity, longevity, and growth (Ghoshal 2005). The differences in the performance and quality of such processes persist over time, signifying that core management practices cannot be replicated easily.

Operational excellence matters a great deal, but it should be viewed as a critical complement to strategy: if an organization cannot get its operational fundamentals right, it does not matter how good its strategy is (Wong 2020). Moreover, if organizations maintain sound management practices, they can leverage them to create dynamic capabilities, such as evidence-based decision making, data analytics, and cross-functional communication, that are crucial for thriving in a volatile and uncertain industry. Attaining managerial competence calls for sustained effort and a sizable investment in processes and people through good times and bad, and those investments in turn create a key barrier to imitation (Yeow, Soh & Hansen 2018).

Productivity is a notable metric of economic progress and bears on economic growth, incomes, and competitiveness. Productivity is shaped by management practices, global market structures, the competencies and skills of people, and the adoption of new technologies (Yeow et al. 2018). A growing body of evidence suggests that organizations that engage in target setting, performance-centered human resource management, and extensive use of data analysis are more productive and exhibit higher productivity growth than those with less formalized management practices.

This evidence is concentrated in the manufacturing sector, where most recent studies have been based; nonetheless, it is becoming strong for the service sector as well (Bloom et al. 2019). Until recently, information on management has been lacking: small-sample analyses and case studies can provide useful anecdotes, but they fail to generalize to the wider economy because of unrepresentative samples and a narrow focus.

In the contemporary turbulent competitive business landscape, organizational managers strive to achieve competitive advantage through the efficient and effective use of resources. Good management practices at all organizational levels have been shown, and are increasingly accepted, to be a sure way of improving productivity and growth (Bloom et al. 2012). Improved productivity, in turn, allows organizations to fulfill their obligations to suppliers, government, stakeholders, consumers, and employees while remaining competitive.

Research shows that organizations that apply globally accepted management practices perform better than those that fail to do so. This implies that improved management practice is among the most effective approaches organizations can employ to outpace their rivals (Ghoshal 2005). Greater competition pushes for improved management practice; labor market flexibility, on the other hand, leads to better people-management habits; and well-managed organizations are those most likely to have highly educated employees.

Dynamic Capabilities and Management Practices

Organizations that maintain sound management practices leverage them to create dynamic capabilities. Recent studies have put a particular focus on the significance of dynamic capabilities to organizational success and longevity (Teece, Peteraf & Leih 2016). In organizational theory, dynamic capabilities are an organization’s ability to build, reconfigure, and integrate external and internal competencies to address a fast-changing business environment (Fainshmidt, Wenger, Pezeshkan & Mallon 2019). The term was first coined by David Teece in 1997. Dynamic capabilities reconcile seemingly incongruous demands: an organization must be stable enough to deliver original and distinctive value to its clients while remaining adaptive enough to adjust when conditions demand.

According to Wong (2020), dynamic capabilities are tied to original management practices and business models, making them hard to imitate; they allow for extension, modification, and creation within a firm. The development of the iPod is a good example of dynamic capabilities in action (Felin & Powell 2016). After realizing that existing MP3 players were large and aesthetically unattractive, Apple seized the opportunity to design the smaller, more appealing iPod, and then broadened its focus to consumer electronics rather than sticking to computers alone.

The move allowed Apple to dominate both the music and the portable digital music player industries, and it depicts the company as a creative and aesthetically focused firm. With this said, developing dynamic capabilities depends on three core organizational activities: sensing, seizing, and transforming (Sadun, Bloom & Van Reenen 2017). Sensing entails evaluating consumer needs and opportunities external to the firm; seizing entails the firm reacting to the needs of the market to maximize company value, including securing access to resources and designing innovative business models.

Transforming, on the other hand, involves renewing organizational processes and maintaining their relevance to customers (Shao 2019). This calls for managers to continuously improve, iterate, and streamline processes.

The dynamic capability concept has been linked to the resource-based view of the organization as well as to the idea of ‘routines’ in evolutionary theories of organization. Dynamic capabilities emphasize competitive survival amid changing business conditions, whereas the resource-based view stresses sustainable competitive advantage (Felin & Powell 2016; Yeow et al. 2018). It is argued that dynamic capabilities serve as a bridge between evolutionary approaches to organization and the economics-based strategy literature (Yeow et al. 2018). Dynamic capabilities theory concerns how managers establish strategies to adapt to rapid, intermittent change while sustaining the minimum capability levels needed to ensure competitive survival (Sadun et al. 2017).

Industries that rely on particular traditional manufacturing processes are not positioned to alter those processes on short notice, especially when new technology arrives. When this transpires, organizational managers must adapt their routines to make the most of their resources while concurrently planning for future changes in processes as those resources degrade (Shao 2019). With this said, the type of change emphasized by dynamic capabilities theory concerns internal capabilities, not the external forces of business.

Many scholars argue that the theory of dynamic capabilities is tautological and vague, and that further research is required to measure dynamic capabilities and apply the concept to practical management contexts (Sadun et al. 2017; Teece et al. 2016). This assertion seems to hold true: as helpful as the theory is for addressing rapidly changing business conditions, it still fails to explain how to do so.

According to Teece et al. (2016), the theory’s capabilities are difficult to identify and operationalize, and at times core capabilities can harden into core rigidities (Felin & Powell 2016). In this regard, some studies have argued that it is still hard to apply the theory in its present state without being able to develop, identify, and specify the capabilities. Nonetheless, recent studies have introduced a mechanism from dynamic capability theory for net enablement, called the “Net-Enabled Business Innovation Cycle”, to further understanding and help predict how organizations transform the dynamic capabilities linked to net-enablement into consumer value (Shao 2019; Sadun et al. 2017).

Net-enabled organizations are able to constantly reconfigure their external and internal resources, applying digital networks to exploit opportunities through routines, rules, analysis, and knowledge, and thereby create consumer value from the net-enablement capability (Sadun et al. 2017).

Businesses maintain portfolios of hard-to-trade, idiosyncratic assets and competencies, and competitive advantage can flow directly from sustaining scarce, difficult-to-imitate resources such as expertise and novel ideas (Felin & Powell 2016; Teece 2007). Nonetheless, in a rapidly changing business environment characterized by stiff competition, sustainable competitive advantage demands more than the possession of hard-to-imitate assets; it demands unique and hard-to-imitate dynamic capabilities (Teece 2007). Such capabilities can be leveraged to continuously extend, protect, create, and upgrade the organization’s unique assets.

How the Findings Might Help Improve the Performance of the Firm

For the company I work in, the findings of this report are good news: the company can access performance improvements simply by applying and implementing good management practices already used by other firms. Very few firms have management practices that are above average, and the need to spread the word to thousands of underperforming organizations is urgent. A large part of the opportunity for improvement rests with local managers. To determine how far behind the company is, managers must rigorously assess their own management practices and contrast them with others; they can benchmark themselves by industry and country (Bender et al. 2018). Awareness is very low, and raising it should be the first initiative taken by the company’s managers.

After establishing where they need to improve, managers should embrace slow but steady improvement. Successful companies have reached greater heights by making good beginnings: identifying the processes that require immediate change and then developing metrics to monitor progress over the short and long term. Goals must be visible to all employees and translated into individual, group, and company-wide targets that are monitored meaningfully.

Of course, immediate results cannot be expected, but establishing powerful incentives and focused targets, and constantly monitoring performance, can be effective in driving greater shifts in future. Global organizations have been obliged to embrace a systematic management approach, and only by maintaining robust and effective management practices have they managed to replicate similar performance standards across various cultures, markets, and regions (Ghoshal 2005). These organizations are realizing the benefits that come with good management practices, including better capital returns, increased growth, and higher productivity. Those benefits are readily accessible to other firms, yet very few have made the effort to obtain insight into the quality of their own management practices. Those that do enjoy access to a cost-effective and sustainable competitive edge.

Lessons

The review of the three articles speaks to the credibility and rigor of research in social science. The articles obviously feature fundamental differences in format, length, and content, but all three share similar sentiments, and all are built on management surveys across various companies. A critical lesson drawn from these studies is that focusing only on senior managers for feedback in research surveys is a limitation: the sociological literature shows that senior managers and workers within the same company often hold differing opinions when evaluating management practice. With this said, workers might provide a valuable counterpoint to their managers’ views.

Another lesson drawn from the review is that, unlike quantitative studies, qualitative research reports need a thick description of phenomena and context. This results from interpreting and describing observed behavior within a particular circumstance. Reports in this vein must go beyond mere fact to present detail and the webs of relationships that join the dots and establish the sequence of events for the topic in question. In thick description, the actions, meanings, feelings, and voices of persons are heard. Further, a thick description gives a balanced view of interpretation and analysis while showing that the research reflects thoroughness, appropriateness, and rigor. I have learned that this approach supports the transferability and trustworthiness of research to other contexts.

A further lesson is that it is paramount to establish a clear sense of urgency, small goals, and a clear research focus in order to sort through immense volumes of data and coalesce disparate pieces into a remarkable interpretation, with extrapolations to a composite and dynamic literature. Additionally, honing an empirical focus characterized by abstract reasoning and analysis helps establish a contextual sensitivity that allows one to develop strong associations between the author’s perspective and the context in which the research is grounded.

A final lesson is that to successfully scrutinize literature, a person must develop a well-structured workflow and a detail-oriented mindset to achieve a methodical review; otherwise one can become overwhelmed and lapse into mere data exploration. Further, one needs to read inspectively and ravenously to pinpoint specifics and gaps in the literature. For a successful review and summary, one also needs a data-driven mindset that allows thinking outside the box so that one’s evaluation expertise serves its purpose.

References

Bender, S., Bloom, N., Card, D., Van Reenen, J., & Wolter, S. 2018. Management practices, workforce selection, and productivity. Journal of Labor Economics, 36(S1), S371-S409.

Bloom, N., Brynjolfsson, E., Foster, L., Jarmin, R., Patnaik, M., Saporta-Eksten, I., & Van Reenen, J. 2019. What drives differences in management practices? American Economic Review, 109(5), 1648-83.

Bloom, N., Sadun, R., & Van Reenen, J. 2012. Does management really work? Harvard Business Review, 90(11), 76-82.

Fainshmidt, S., Wenger, L., Pezeshkan, A., & Mallon, M. R. 2019. When do dynamic capabilities lead to competitive advantage? The importance of strategic fit. Journal of Management Studies, 56(4), 758-787.

Felin, T., & Powell, T. C. 2016. Designing organizations for dynamic capabilities. California Management Review, 58(4), 78-96.

Ghoshal, S. 2005. Bad management theories are destroying good management practices. Academy of Management Learning & Education, 4(1), 75-91.

Sadun, R., Bloom, N., & Van Reenen, J. 2017. Why do we undervalue competent management? Harvard Business Review, 95(5), 120-127.

Shao, H. X. 2019. Developing organizational dynamic capabilities in project-based integrated solution: A study of servitization in the Chinese water treatment industry.

Teece, D., Peteraf, M., & Leih, S. 2016. Dynamic capabilities and organizational agility: Risk, uncertainty, and strategy in the innovation economy. California Management Review, 58(4), 13-35.

Teece, D. J. 2007. Explicating dynamic capabilities: The nature and microfoundations of (sustainable) enterprise performance. Strategic Management Journal, 28(13), 1319-1350.

Wong, A. 2020. The key to keeping up: Dynamic capabilities. California Management Review.

Yeow, A., Soh, C., & Hansen, R. 2018. Aligning with new digital strategy: A dynamic capabilities approach. The Journal of Strategic Information Systems, 27(1), 43-58.


Community Economic Development

Community Economic Development

Effective public administration is key to promoting community economic development. Community economic development is a term used to describe collective action by members of a particular community to generate common solutions to their economic problems. Such initiatives may involve both small and large groups (Green & Haines, 2012). Public administration, on the other hand, refers to civic leadership of community affairs that is held directly responsible for executive roles (Calabrò & Della Spina, 2014). Public administration aims at promoting respect and contributing to the enhancement of the worth, dignity, and potential of members of the public. Effective public administration enhances the success of community economic development initiatives.

The model of the community development process, also often referred to as the asset model of development, provides mechanisms to manage community-based and development infrastructure. It is mainly concerned with the management of human resources (Patterson, Silverman, Yin, & Wu, 2016). It provides processes and practices aimed at promoting the effective and efficient management of human capacity in public service (Loh & Norton, 2015). The model also champions human resource management (HRM) decisions that are accurate, fair, transparent, and free from political influence (Green & Haines, 2012). There are six major components of the model: strategic planning; organizational design and classification; competency profile development; resourcing; learning, development, and certification; and leadership development.

Strategic planning requires that the business needs of a community be identified first before any investments are made. The community economic development program that is developed should target these gaps and needs. Strategic planning requires that all decisions be made based on existing facts (Christens & Inzeo, 2015). As a community developer, one should engage members of the public to determine their needs and work with them to develop solutions that best satisfy these needs. For example, the needs of an urban community vary from those of a rural society. Subsequently, programs to be rolled out in the two areas should vary to some extent.

Organizational design and classification is the second component of the model. It requires that standardized services and products be developed with the aim of providing an integrated and consistent approach to staffing. Subsequently, well-defined and organized job positions and streams are developed (Green & Haines, 2012). In community development, organizational design and classification is important since it helps in the division of work: different persons are given specific responsibilities (Majee, Goodman, Reed Adams, & Keller, 2017). For example, an engineer working in a community development project should have clearly defined responsibilities.

Competency profile development is the third component of the model. It champions the development of competency-based products that help define and support job success. As such, public administration managers are in a position to use profiles for staffing purposes (Majee et al., 2017). At the same time, employees are able to identify what is required of them to earn promotions (Calabrò & Della Spina, 2014). A community developer should be involved in designing learning programs that fill the skill and knowledge gaps of the human resources. These programs should target all fields.

The fourth component of the model is resourcing. It involves the launching of collective recruitment and staffing programs. This helps increase access to qualified human resources. At the same time, it ensures that vacant positions are filled in a quick, effective, and efficient manner (Green & Haines, 2012). The model enables community developers to understand the need to have a fully staffed team at all times. For example, it would not make sense to run a community development program whose agenda is housing without an architect, an engineer, and an accountant.

Learning, development, and certification is the fifth component of the model. It develops programs that are aimed at enhancing the skills and capabilities of human resources (Patterson et al., 2016). It ensures that employees have access to all the programs and tools they need to improve their competencies and skills (Christens & Inzeo, 2015). As such, they are in a position to advance in their careers. A community developer must understand that change is inevitable. As such, they must always seek to update the skills and knowledge of their teams.

The last component of the model is leadership development. It involves expanding the capacity of individuals to enable them to perform leadership roles (Patterson et al., 2016). In community development, this is important since it equips persons with the right attitudes and abilities to influence others positively (Calabrò & Della Spina, 2014). A community developer must understand the need to create strong leadership and must mentor the leaders who are to guide the implementation and maintenance of community-based projects (Majee et al., 2017). This is important for ensuring the sustainability of community economic development programs.

Zoning and Community Economic Development

Zoning and comprehensive planning continue to be important tools for community development. Zoning is a development regulation tool. It involves the division of the existing land into zones. Each zone is dedicated to a specific purpose. In community development, zoning is key in ensuring that the goals of the comprehensive plan are implemented in an effective and efficient manner (Green & Haines, 2012). Public administrators must be keen to ensure that zoning guidelines are adhered to. Through zoning, developers can improve their communities by encouraging the use of the existing land resources in a sustainable manner (Loh & Norton, 2015). Zoning promotes sustainability of development programs since it allocates land resources for all the needs of a community. For example, zoning ensures that members of a community have adequate land for residential, farming, and economic purposes.

Comprehensive planning is the process through which the goals and aspirations of a community are determined. It results in the development of a comprehensive plan. In community development, a comprehensive plan dictates public policy in matters of land use, housing, recreation, public utilities, and transportation (Green & Haines, 2012). Comprehensive plans may cover either a small or a large geographical area (Loh & Norton, 2015). As a developer, one can use comprehensive planning to improve communities by assessing their needs and working with them to seek appropriate and sustainable solutions (Christens & Inzeo, 2015). A developer must be keen to involve all members of a community in the planning process. This ensures that the comprehensive plan created has the support of all members of the community, which in turn tends to ease the implementation process.

References

Calabrò, F., & Della Spina, L. (2014). The cultural and environmental resources for sustainable development of rural areas in economically disadvantaged contexts-economic-appraisals issues of a model of management for the valorisation of public assets. Advanced Materials Research, 869(1), 43-48.

Christens, B. D., & Inzeo, P. T. (2015). Widening the view: situating collective impact among frameworks for community-led change. Community Development, 46(4), 420-435.

Green, G. P., & Haines, A. (2012). Asset building and community development (3rd edn.). Newbury, CA: Sage Publications.

Loh, C. G., & Norton, R. K. (2015). Planning consultants’ influence on local comprehensive plans. Journal of Planning Education and Research, 35(2), 199-208.

Majee, W., Goodman, L., Reed Adams, J., & Keller, K. (2017). The We-Lead Model for Bridging the Low-Income Community Leadership Skills-Practice Gap. Journal of Community Practice, 3(1), 1-12.

Patterson, K. L., Silverman, R. M., Yin, L., & Wu, L. (2016). Neighborhoods of Opportunity: Developing an operational definition for planning and policy implementation. Journal of Public Management & Social Policy, 22(3), 143.


Reliability and Validity Academic Research

Reliability and Validity

Inter-rater reliability

Inter-rater reliability is a statistical concept in research whereby a particular phenomenon is evaluated or rated by multiple raters. It is the extent or degree to which the rating scores of the multiple raters agree, reflecting homogeneity and unanimity amongst them. To measure inter-rater reliability, one counts the number of ratings on which the raters agree in a particular judgment exercise, divides that by the total number of ratings, and converts the result into a percentage. McHugh (2012) provides a good example of how inter-rater reliability is calculated by reviewing the various methods previously stipulated by scholars.
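As a minimal illustration, the sketch below computes simple percent agreement for two raters, along with Cohen’s kappa, the chance-corrected statistic that McHugh (2012) reviews. The ratings and function names are hypothetical, and the hand-rolled functions are illustrative rather than a reference implementation.

```python
# A minimal sketch of two common inter-rater reliability measures for two
# raters: simple percent agreement and Cohen's kappa (the chance-corrected
# statistic reviewed by McHugh, 2012). The example ratings are hypothetical.

def percent_agreement(rater_a, rater_b):
    """Share of items on which the two raters gave the same rating."""
    agreements = sum(a == b for a, b in zip(rater_a, rater_b))
    return 100.0 * agreements / len(rater_a)

def cohens_kappa(rater_a, rater_b):
    """Agreement corrected for the agreement expected by chance."""
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of each rater's marginal proportions, summed.
    expected = sum(
        (rater_a.count(c) / n) * (rater_b.count(c) / n) for c in categories
    )
    return (observed - expected) / (1 - expected)

rater_a = ["yes", "yes", "no", "yes", "no", "no", "yes", "no"]
rater_b = ["yes", "no", "no", "yes", "no", "yes", "yes", "no"]

print(f"Percent agreement: {percent_agreement(rater_a, rater_b):.1f}%")  # 75.0%
print(f"Cohen's kappa:     {cohens_kappa(rater_a, rater_b):.3f}")        # 0.500
```

The gap between the two numbers shows why kappa is preferred: some of the raw agreement would have occurred by chance even if the raters were guessing.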

Test-retest reliability

This is another aspect of reliability. Test-retest reliability is the extent or degree to which results obtained from the same test, administered on more than one occasion, are consistent over time. In test-retest reliability, the same test is administered to the same people on two or more occasions and the results are then compared. There are two primary formulas for measuring test-retest reliability. The first, better applied where the test was administered twice, is the Pearson correlation, which tests how well two sets of data correlate.

The other method is the intraclass correlation, which applies where more than two tests were administered. These formulas yield test-retest coefficients that range between 0 and 1. In his article on validity and reliability in social science research, Drost (2011) lays out the various aspects of reliability and validity and gives detailed examples of measuring test-retest reliability.
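The following is a minimal sketch of a test-retest check using the Pearson correlation between two administrations of the same test; the scores and variable names are hypothetical. (The intraclass correlation for more than two administrations is more involved and is not shown here.)

```python
# A minimal sketch of a test-retest reliability check using the Pearson
# correlation between two administrations of the same test.

import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

first_administration  = [12, 15, 9, 20, 17, 11, 14]   # hypothetical scores
second_administration = [13, 14, 10, 19, 18, 10, 15]  # same people, later date

r = pearson_r(first_administration, second_administration)
print(f"Test-retest reliability (Pearson r): {r:.3f}")  # close to 1 = stable
```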

Face validity

Face validity, also referred to as logical validity, is the extent or degree to which an evaluation or investigation intuitively seems to measure the variable or construct it is objectively meant to measure. In other words, a test has face validity when the assessment tool appears to do what it is meant to do. To gauge face validity, one can assess the concepts or ideas to be measured against their theoretical and practical applications.

Predictive validity

Predictive validity is the measure of how accurately a value obtained in a research study can be used to predict future patterns in the field studied. In their research on the predictive validity of public examinations, Obioma and Salau (2007) use predictive validity to examine how students’ performance in public examinations predicts their future academic performance at college and university level.

Concurrent validity

This entails the degree to which results from a current test relate to results from a previous test. For instance, if measurements of an individual’s IQ are taken at two different times, concurrent validity is assessed by comparing how closely the two sets of results agree. A good example of research that employs concurrent validity is the study by Tamanini et al. (2004) on the Portuguese version of the King’s Health Questionnaire administered to women after stress urinary incontinence surgery. The researchers indicate how this test is applied and measured by using it as the primary instrument in their research.

Addressing the issues of reliability and validity

For most qualitative researchers, the nature of the data matters more than the other descriptive elements of the research; this, however, does not rule out the need for conciseness in the descriptive sections. Reliability in research concerns the stability and consistency of the data, as well as the repeatability of results if several tests are done (LoBiondo-Wood & Haber 2014). Validity, on the other hand, concerns the accuracy and integrity of the data or results collected from the various tests a researcher performs. Researchers address these issues of validity and reliability in different ways, based on the purpose and the kind of research they carry out.

Obioma and Salau (2007) research the effects of public examinations on the future academic performance of students. Their focus, therefore, is on data validation: ensuring that their conclusions and outcomes have the accuracy and integrity required to support their arguments. The two researchers apply both predictive and concurrent validity in their study, and predictive validity is what their research question is grounded in.

They have made sure that the data and arguments they bring forth are substantially valid and convincing enough to meet the objective of predicting the future academic performance of children who undertake the public examinations governed by the various bodies in the country. They have, however, not applied any reliability measures in their research, or at least none that can be easily identified.

Drost’s article touches on both aspects, validity and reliability. It is presented not as original research but as a review of both aspects, from the perspective of the social sciences. Drost covers the various forms of validity and reliability, providing real-life examples and the methods that can be used to measure each. She approaches the concepts from a general perspective, accounting for the reasons researchers, especially in education and the social sciences, should adopt a culture of ensuring validity and reliability in their results. She explains the various forms of reliability, provides formulas and tools that can be applied to measure them, and identifies the various elements that can affect the level of validity and reliability of data or results in research.

In conclusion, the concepts of validity and reliability are important in research, and researchers from various fields should adopt a culture of achieving them in the results they obtain. As Drost argues, strong support for the validity and reliability of research not only makes the research more credible but also limits the critiques the research may face, filling the gaps that might otherwise be identifiable in it. A researcher should understand the various forms of both reliability and validity and know when it is appropriate to apply each in research.

References

McHugh, M. L. (2012). Interrater reliability: the kappa statistic. Biochemia Medica, 22(3), 276-282.

Drost, E. A. (2011). Validity and reliability of social science research. Education Research and Perspectives, 38(1), 105.

Obioma, G., & Salau, M. (2007). The predictive validity of public examinations: A case study of Nigeria. Nigerian Educational Research & Development Council (NERDC) Abuja.

Tamanini, J. T., Dambros, M., D’ancona, C. A., Palma, P. C., Botega, N. J., Rios, L. A., & Netto Jr, N. R. (2004). Concurrent validity, internal consistency and responsiveness of the Portuguese version of the King’s Health Questionnaire (KHQ) in women after stress urinary incontinence surgery. International Braz j Urol, 30(6), 479-486.

LoBiondo-Wood, G., & Haber, J. (2014). Reliability and validity. In G. LoBiondo-Wood & J. Haber (Eds.), Nursing research: Methods and critical appraisal for evidence-based practice (pp. 289-309).


Critical Thinking and “What-If” Analyses

Critical Thinking and “What-If” Analyses in Management Decisions

“No problem can be solved by the same consciousness that created it.”

          – Albert Einstein

“We are approaching a new age of synthesis. Knowledge cannot be merely a degree or skill . . . it demands a broader vision, capabilities in critical thinking and logical deduction without which we cannot have constructive progress.”

   – Li Ka Shing

“To every complex question there is a simple answer and it is wrong.”

          – H. L. Mencken

In its simplest interpretation, we all apply critical thinking in our daily lives, often without even giving a nod to the process we use to arrive at routine decisions. The common characteristics of basic decision making that we all use are elementary: gathering information and keeping informed about areas of interest and the particulars to be considered before arriving at a decision; asking questions to ensure we clearly understand pertinent factors; brainstorming; weighing the evidence we have gathered, utilizing a “tried and true” method we have adopted or usually rely on, and, in so doing, determining what is actually relevant to the problem or decision at hand; taking historical elements into account, but assessing facts within their current context; and seeking to discern the truth of any claims or assertions, and determining whether bias exists that would affect facts or outcomes.

This pattern is repeated for all decisions, from the smallest – for instance, what apparel to wear, in light of planned physical activities or appropriateness for an event – to the most important of decisions, such as whether or not to propose or accept an offer of marriage, or what university to attend.

From a more sophisticated perspective, the simple steps commonly used to arrive at a decision can be deconstructed as

  • Systematic questioning
  • Structured problem solving
  • Risk assessment and management
  • Progressive decision-making
  • Management of thought process
  • Arrival at a solution and implementation

Brainstorming can help determine the appropriate framework of inquiry necessary to gather the most pertinent information, which depends, of course, upon the answers being sought. The methodology used in the problem-solving process provides the structure, and there are several methods and systems that can be utilized depending on the nature and scope of the factors to be evaluated and their relationship, if any. The broader the criteria, the more interrelated the particular set of decision problems and apparent alternatives, and the more variable in number and threat level the risks to be considered, the more complicated the methodology must be in order to assimilate all pertinent information and accommodate as many options and outcomes as possible. Once again, brainstorming is required to envision all potential perils or disruptive forces that might impinge upon the success of an entity or endeavor.

A simple outranking of one outcome above the next provides a variety of alternative responses and outcomes to unintended events, pairing alternatives to determine the better performing of each pair. Upon determining which alternative is more effective, that is, which outranks the other, these assessments of problem-solving or responsive value can be aggregated into a ranking or partial-ranking scheme which, although it may not deliver a definitive answer, offers a reduced “shortlist” of acceptable alternatives, as sketched below.
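A minimal sketch of this pairwise outranking idea follows, assuming a hypothetical table of criterion scores and a simple majority rule for deciding when one alternative outranks another; real outranking methods (ELECTRE-style procedures, for example) add criterion weights and thresholds, which are omitted here.

```python
# A minimal sketch of pairwise outranking over a hypothetical score table:
# rows are alternatives, columns are criteria (higher is better). An
# alternative "outranks" another when it wins on a strict majority of
# criteria; alternatives never outranked by any rival form the shortlist.

from itertools import permutations

scores = {  # hypothetical criterion scores
    "Plan A": [7, 5, 9],
    "Plan B": [6, 8, 9],
    "Plan C": [4, 6, 5],
}

def outranks(a, b):
    """True if alternative a beats b on a strict majority of criteria."""
    wins = sum(sa > sb for sa, sb in zip(scores[a], scores[b]))
    return wins > len(scores[a]) / 2

outranked = {b for a, b in permutations(scores, 2) if outranks(a, b)}
shortlist = [alt for alt in scores if alt not in outranked]
print("Shortlist:", shortlist)  # Plan C is outranked; A and B remain
```

Note that the result here is not a single winner but a reduced set of acceptable alternatives, exactly the “shortlist” behavior described above.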

Progressive decision-making tackles one element at a time, in order of importance, placing decisions in a sequence that comprises a plan of avoidance, attack or defense in the face of envisioned obstacles or other developments. Management of the thought process provides a discipline that enables a rational approach to even the most upsetting of possibilities, removing emotion to thereby clarify thought and enable focus. Arrival at a solution and implementation, perforce, requires that the number of likely risks and feasible alternatives be winnowed and refined, to arrive at those scenarios that are most credible, so that they may be addressed in some detail.

Decision Making Criteria

When facing single-criterion or limited-criteria problems and decisions, a number of relatively simple methods are available to determine the alternative offering the best value or outcome. Elementary decision tools include decision trees that sequentially branch one decision into the next in a basic “this, therefore that” progression; decision tables of alternatives; pro-con analytical comparisons; maximax/maximin strategies; cost-benefit analyses; contingency planning; and what-if analysis.

All are elementary pencil-to-paper analyses, simple enough to calculate manually, with no need of sophisticated mathematical skill or computational resources.
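Although these are pencil-to-paper tools, a short sketch makes two of them concrete. The following applies the maximax and maximin strategies named above to a hypothetical payoff table; the alternatives, states of the world, and payoffs are invented for the example.

```python
# A minimal sketch of the maximax and maximin strategies applied to a
# hypothetical payoff table: rows are alternatives, columns are states
# of the world (boom, steady, slump), entries are payoffs.

payoffs = {
    "Expand plant": [80, 30, -20],
    "Add a shift":  [50, 35, 10],
    "Do nothing":   [20, 20, 20],
}

# Maximax (optimistic): pick the alternative with the best best-case payoff.
maximax = max(payoffs, key=lambda alt: max(payoffs[alt]))

# Maximin (pessimistic): pick the alternative with the best worst-case payoff.
maximin = max(payoffs, key=lambda alt: min(payoffs[alt]))

print(f"Maximax choice: {maximax}")  # Expand plant (best case 80)
print(f"Maximin choice: {maximin}")  # Do nothing (worst case 20)
```

The two strategies disagree here, which is the point: the same table supports different choices depending on the decision maker’s attitude to risk.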

Multi-attribute optimization problems, such as those often addressed by planning departments and larger businesses and organizations, typically involve a finite number of criteria but an infinite number of feasible alternatives.


2007-2008 Financial Crisis Essay

The 2007-2008 Financial Crisis

According to Saylor Academy (2012), a financial crisis happens when many financial markets function inefficiently or stop functioning completely; when only one or a few financial markets stop functioning, the resulting crisis is nonsystematic. The 2007 financial crisis started with subprime mortgages, and in 2008 it turned into a severe systemic crisis after major financial institutions failed.

The 2007-2008 financial crisis was a combination of many things, including: monetary policy easing, banks taking excessive risks, consumers borrowing more than they could afford, the eventual US housing market crash, stocks and poor risk pricing, the federal budget deficit, excessive leveraging by banks, predatory lending, and poor underwriting practices. This paper explores how the 2007-2008 financial crisis happened, which markets were impacted, and how it was dealt with.

Monetary policy easing – Dismantling policies that were put in place to prevent historic failures from repeating is like playing Jenga: eventually everything falls. According to Market Oracle Ltd (2009), the first block of deregulation was removed in 1980 with the Depository Institutions Deregulation and Monetary Control Act of 1980 – the first time the banking system was let loose a little after the regulations put in place following the Great Depression.

The act accomplished the following: it required smaller reserves from banks; it created a committee to phase out federal interest rate caps; it increased federal deposit insurance; it allowed banks to get credit advances from the Federal Reserve discount window; and finally, it overrode state laws that capped the interest rates lenders could charge on mortgage loans.

The second piece of monetary easing came when the government, in its great wisdom or greed, decided to pick apart key pieces of the Banking Act of 1933 (the Glass-Steagall Act). The act was in place to prevent banks from gambling with people’s savings: it separated commercial banks from investment banks, which was very important because it meant investment banks could not take huge risks with people’s money.

The Gramm-Leach-Bliley Financial Services Modernization Act was the final straw that broke the camel’s back. This law removed the last protective barrier that the Glass-Steagall Act provided and allowed banks to do whatever they wanted; for example, Travelers was able to buy Citibank. Remember that the law wanted to keep investment banks away from people’s savings? Well, this last act allowed investment banks to play with other people’s money (Market Oracle Ltd, 2009). Monetary policy easing removed all the roadblocks that annoyed banks while supposedly keeping people’s savings intact and safe; additionally, it gave birth to subprime lending, which would later be a major player in the housing market crash.

Banks taking excessive risks – According to The Economist (2013), Senator Phil Gramm was once quoted as saying, “I look at subprime lending and I see the American Dream in action”. With the economy doing so well and inflation low, banks and investors were willing to take more risks in order to get a piece of the action. Banks became irresponsible with mortgage lending and lowered their standards for subprime lending, so borrowers who should not have gotten loans were able to get into houses they could not afford.

In order to lessen or mitigate the risks, banks played a numbers game: they gathered many high-risk loans and put them together in groups (pooling), depending on the probability of defaults. In theory, this would decrease the risk, because what were the chances that all the borrowers in a pool would default on their loans? (The Economist, 2013)
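A minimal sketch of that pooling logic, with hypothetical numbers: under an independence assumption, the probability that every loan in a pool defaults shrinks geometrically with pool size, which is exactly why pooling appeared to tame risk.

```python
# A minimal sketch, with hypothetical numbers, of why pooling seemed to
# reduce risk: if defaults are independent, the chance that every loan
# in a pool defaults shrinks geometrically with pool size.

p_default = 0.10  # hypothetical chance a single subprime loan defaults

for pool_size in (1, 5, 10):
    p_all_default = p_default ** pool_size  # independence assumption
    print(f"Pool of {pool_size:>2}: P(all default) = {p_all_default:.10f}")
```

The flaw, as the crisis showed, was the independence assumption itself: subprime defaults were driven by the same housing crash and were therefore highly correlated, so pooled loans failed together.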

Consumers borrowing more than they could afford – This comes back to subprime mortgages and the timing of what was happening with the economy and the housing market. According to John V. Duca (2013), borrowers traditionally had to have good credit, good income, and a good debt-to-income ratio in order to become the proud owners of a house with a white picket fence; those who did not meet these requirements would historically not qualify for a loan to buy a house. With more people qualifying for mortgages they could not afford, the housing market grew because the economy was seeing more first-time home buyers.

The increase in demand created an increase in housing prices, which required more money to be borrowed by people who were already stretched thin on their borrowing (John V. Duca, 2013). Up to this point, banks and consumers were lending and borrowing money banking on the best-case scenario and not planning for the worst. Added to the situation was the fact that the government had mandated Fannie Mae and Freddie Mac to increase home ownership, so both had purchased lots of subprime mortgages (John V. Duca, 2013).

Secured Lending – 2007-2008 Financial Crisis

US Housing Market Crash – In the famous words of Isaac Newton, “What goes up must come down”. Once the housing market reached its plateau, mortgage financing and home selling became less attractive, prices began to drop, and lenders and investors started losing money. The first casualty of subprime mortgages came in April 2007, when New Century Financial Corp. filed for bankruptcy; after that, all the pooling that had been done by experts to mitigate default risk was downgraded to high risk, and many small subprime lenders went out of business. Lenders stopped issuing loans, especially the high-interest-rate (subprime) ones, and as a result fewer people got loans and fewer houses were purchased by consumers.

Low demand for houses led to a drop in prices; the famous law of supply and demand had kicked in. Prices dropped so much that borrowers trying to sell could not sell their homes for what they owed on their loans. Remember that the government had told Fannie Mae and Freddie Mac to increase home ownership? Well, as a result, Fannie Mae and Freddie Mac suffered major losses on all the subprime mortgages they had purchased and insured (John V. Duca, 2013). The housing market was flooded by banks selling foreclosed and repossessed homes, by people trying to sell their houses before they were foreclosed on, and by people doing short sales, in addition to the normal number of houses being sold by the usual sellers (new construction, people moving, and so on).

US House Prices – 2007-2008 Financial Crisis

Stocks and poor risk pricing – Prior to the economic crisis, investors were unable to determine the exact value of the risk they would be bearing when taking up stocks or financial assets from traders. Risk pricing, or the cost of risk, is implied in the rate of interest charged, and investors, given the poor risk profiles of certain assets in the market, would not know the value of the risk assumed when buying stocks or the value of the risk exchanged when selling them (Amadeo, 2010; Williams, 2010). Market participants were thus inaccurate in their risk analysis due to the complex financial system and its innovations, among other factors such as ignorance and deceit on the part of the traders themselves.

JP Morgan is reported to have sold and quoted the risk price of CDOs at prices far below the market price, for lack of accurate information, in contrast to the stable prices of a perfect market in which market information is publicly available, as envisaged by the Basel accords. In a similar risk-pricing error and crisis, AIG had to be taken over by the American government, at a cost of about 180 billion US dollars of taxpayers’ money, because AIG had taken premiums to guarantee CDS obligations to many small and global parties whose risk profiles were uncertain to the lender and insurer, plunging the institution into near bankruptcy (Amadeo, 2010).

There was then no clear model for ascertaining the level of risk assumed by a guarantor or a borrower, given the dynamic and complex financial innovations of the time and the slow growth of financial academia, practice, and experience within the span of two years, that is, between 2007 and 2008 (Jickling, 2009).

The Role of The Federal Reserve in the 2007-2008 Financial Crisis

The Federal Reserve and liquidity – The Federal Reserve is the lender of last resort to banks and thus the last savior in a financial crisis. However, the Reserve lacked adequate cash to lend to banks, given the rapid mortgage and loan processing witnessed alongside booming borrowing and house financing by banks and financial institutions. Commercial banks could not maintain adequate liquidity to finance their obligations and the large volumes of mortgages they were buying.

At the same time, the prices of commodities, especially minerals such as oil and copper, were growing at an unsustainable rate, with most of these being imported from abroad. The rise gave traders the impression that it was an opportunity to invest in the appreciating metals, and thus there was a general cash outflow from the US in exchange for metals and gems; increased trading led to a decline in those prices and a general loss of cash from the American economy to oil-producing and mining countries such as the Middle Eastern nations. The cash inflow into the US was less than the cash outflow, and commercial banks would earn less than they were paying as the cost of leveraging. This left the Federal Reserve with less to inject into the economy to facilitate liquidity among the commercial banks (Jickling, 2009).

Excessive leveraging by banks – Before the 2007-2008 financial crisis struck the market, banks and other institutions in the mortgage and finance sector had used massive leveraging, that is, using credit and other derivatives to acquire assets. Leveraging shifts the risk of lending to the leveraging institution and thus removes a financial institution’s risk averseness: such institutions trade with an appetite for the risky investments they perceive to be most productive. The state of affairs among highly leveraged financial institutions therefore led to risky deals, which ultimately led to high rates of default, and this too was a major contributor to the 2007-2008 financial crisis.

The high level of leveraging also exposed the banks to a massive risk impact should a financial downturn result, and when one did, with the bursting of the housing price bubble, the financial institutions came crumbling down, leading to a global, all-sector financial crisis with little clarity as to which institutions were bankrupt (Amadeo, 2010). This was a result of a complex system of financial derivatives and contracts that was difficult to unravel given the limited financial information then available (Jickling, 2009).
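To make the leverage point concrete, here is a minimal sketch with hypothetical numbers showing why high leverage left banks so exposed: the same small fall in asset values wipes out a far larger share of equity as the leverage ratio rises.

```python
# A minimal sketch, with hypothetical numbers, of why high leverage made
# banks fragile. With leverage L (assets / equity), a fractional drop d
# in asset values destroys L * d of the equity.

def equity_loss(leverage_ratio, asset_drop):
    """Fraction of equity lost when asset values fall by asset_drop."""
    return leverage_ratio * asset_drop

for leverage in (5, 15, 30):
    loss = equity_loss(leverage, 0.04)  # a 4% fall in asset values
    print(f"Leverage {leverage:>2}:1 -> {loss:.0%} of equity wiped out")
# At 30:1 leverage, a mere 4% asset decline erases 120% of equity: insolvency.
```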

Predatory lending – Another factor that contributed greatly to the financial crisis was deceitful predatory lending by financial institutions. The institutions would entice borrowers or mortgage buyers with appealing interest rates and have them commit to mortgages even when such a commitment carried hidden charges or adjustments (The Economist, 2010). A common practice involved using very low initial interest rates to hook people in; upon completion of the mortgage, the client would later realize that it was an adjustable mortgage, with rates rising gradually to almost double the value borrowed.

Many would end up unable to pay back their commitments and have their homes seized, or have to deal with a negative-amortization mortgage (McLean & Nocera, 2010). In one case, the California attorney general sued Countrywide Financial for fraudulently enticing borrowers into bait-and-switch mortgages with expensive mortgage payments (The Economist, 2010). With falling housing prices, home owners with outstanding mortgages had little motivation to keep paying dues on mortgages worth more than their devalued homes, leading to massive defaults and a financial crisis in the industry (Jickling, 2009).

Poor underwriting practices – Another factor that led to the onset and peaking of the financial crisis was poor underwriting practice by intermediaries, banks, and even insurers. Regulations require that a loan process follow the lending institution’s documentation guidelines, and the underwriting process ought to be understood in depth to avoid unforeseen difficulties or illegalities. However, the pre-crisis period was characterized by rapid underwriting processes with little or no attention to the lender’s procedures and rules of engagement.

Loans and mortgages were processed with little or no official documentation completed as per the issuer’s rules of engagement, which led to borrowers being subjected to terms they did not sign up for or were unaware of, to a high default rate among loan holders, and to the selling of loans without full disclosure of the attached terms (Greenberg & Hansen, 2009; Amadeo, 2010). In the end, borrowers would prove unable to honor their commitments on the inflated loans, some of which would never be recovered.

In this saga, about 1,600 mortgages bought by the mortgage firm Citi from mortgage dealers were found to be defective and unenforceable even after the mortgages had been passed on from the dealers to the banker. The poor and fraudulent underwriting process therefore contributed immensely to a financial crisis in which banks could not provide financing, as they had too many commitments to honor, alongside the housing crisis (Jickling, 2009).

In conclusion, this paper asserts, on academic and scholarly authority, that the largest and longest financial crisis witnessed since the Great Depression era resulted from structural factors: the easing of monetary policy, the excess risk assumed by banks, excessive borrowing of cheap but risky loans by consumers, the fall of the US housing market, poor risk pricing on stocks, the federal budget deficit, over-leveraging by banks, predatory lending, and poor underwriting. These factors caused many banks and institutions to collapse.

References

Amadeo, K. (2010). “2008 Financial Crisis: The Causes and Costs of the Worst Financial Crisis Ever Since the Great Depression.” The Balance.

Greenberg, R., & Hansen, C. (2009). “If you had a pulse, we gave you a loan.” NBC News.

Jickling, M. (2009). Causes of the 2007-2008 Financial Crisis.

McLean, B., & Nocera, J. (2010). All the devils are here: unmasking the men who bankrupted the world. Penguin UK.

The Economist (2010). “Predatory lending: let’s not pretend we don’t understand how it worked.”

Williams, M. T. (2010). Uncontrolled risk: the lessons of Lehman Brothers and how systemic risk can still bring down the world financial system. McGraw-Hill.
