A dissertation proposal is a document that presents a plan for a dissertation to reviewers for evaluation. It is, in effect, a road map: it shows where the journey begins, how you will get there, and the destination to be reached. The purpose of the dissertation proposal is to:
Present the problem to be researched and its importance.
Give the instructor an idea of how you will proceed with your dissertation.
Suggest the data necessary for solving the problem and how the data will be gathered, analysed, and interpreted.
A proposal is also known as a work plan, prospectus, outline, statement of intent, or draft plan. It tells us:
What will be done?
Why it will be done?
How it will be done?
Where it will be done?
For whom it will be done?
What is the benefit of doing it?
Dissertation Proposal Format
It should include:
Name & ID of the student
Name of the university
Date of Submission
Table of Contents: List the important headings and subheadings in the dissertation proposal with page numbers.
Chapter 1: Introduction
Introduction/Background: Introduce the specific problem you wish to investigate. Briefly describe the background, i.e., the impact and implications of the topic/issue on the environment (the specific setting in which you are studying the issue). The background should be well elaborated, and it is advisable to include current facts and figures. You should also place the problem in the context of work already done on the topic. The introduction should provide all the necessary initial information so that the reader can better understand the situation under study.
Objectives: State the objectives/goals of the research, keeping in mind the following points:
These should state the purpose of the dissertation
These must be based on logical facts and figures
These must be achievable within specified timeframe and parameters
These objectives should be presented so that they help the reader locate the important points in the research work
The objectives should be clearly phrased in operational terms, specifying exactly what you are going to do, where, and for what purpose
At the end of the study, objectives must be assessed to see if they have been met/achieved or not
Significance: This section lays down the importance or potential benefits of your dissertation. It specifies how your study will improve, modify, or broaden existing knowledge in the field under exploration. Note that such improvements or modifications may also have significant implications. When considering the importance of your study, ask yourself the following questions:
What will be the outcomes of this research study?
Will the results of this research contribute to the solution or development of anything related to it?
What will be improved or changed as a result of the proposed research?
How will results of the study be implemented and what innovations will come out?
Problem Statement/Research Question: This describes the main issue or area to be investigated. The problem is usually represented by the research questions. Research questions are crucial because research is about finding out what is not yet known. A poorly formulated problem or question will lead to poor research, which is why the researcher must know exactly what question he or she wants to answer. The following aspects are important when formulating a problem statement/research question:
A problem statement/research question should be researchable, clear, logical, specific, and precise: a brief yet comprehensive statement that fully describes the issue under study.
The research problem should be grammatically correct and completely convey the main idea to be investigated.
Chapter 2: Literature Review
A literature review is a comprehensive, cited review of published work from secondary sources (journals, research papers, etc.) in the areas of specific interest to the researcher, according to the research problem. The purpose of the literature review is to ensure that:
Important variables that are likely to influence the problem situation are not left out of the study
A clear idea emerges as to which variables would be the most important to consider.
The problem statement can be made with precision and accuracy.
Note: It is important to cite at least 30 findings of researchers in the literature review.
Chapter 3: Theoretical Framework
The conceptual/theoretical framework is best presented with the help of clear diagrams showing the independent and dependent variables, their causal effects, and the final outcomes. The main headings in the theoretical framework are:
Inventory of variables
Direction of relationship
Explanation of established relationship among variables
Inventory of propositions in a sequential order
Hypotheses (Formal statement that presents the expected relationship between an independent and dependent variable)
Hypotheses are tentative statements that should either be accepted or rejected by means of research. Hypotheses give structure and direction to the research; therefore, care should be taken not to oversimplify or overgeneralize when formulating them. Research need not consist of only one hypothesis: the type of problem area investigated and the scope of the research field determine the number of hypotheses to be included in the study. A hypothesis is formulated when the researcher is fully aware of the theoretical and empirical background to the problem. There are two types of hypotheses, "null" and "alternate". Generally, the null hypothesis is used if theory/literature does not propose a hypothesized connection between the variables under study; the alternate hypothesis is generally reserved for situations in which theory/research suggests a connection or a directional relationship.
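As a sketch of how a null and an alternate hypothesis might be tested in practice, the snippet below computes Welch's t-statistic for two hypothetical groups (the group names and all numbers are invented for illustration only); a large absolute t-value is evidence against the null hypothesis of equal means:

```python
import statistics

# Hypothetical example: did a training programme change mean task scores?
# H0 (null): the two groups have the same mean.
# H1 (alternate): the means differ.
control = [72, 68, 75, 71, 69, 74, 70, 73]
treated = [78, 74, 80, 76, 75, 79, 77, 81]

def welch_t(a, b):
    """Welch's t-statistic for two independent samples."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    return (ma - mb) / ((va / len(a) + vb / len(b)) ** 0.5)

t = welch_t(control, treated)
# For samples of this size, |t| well above ~2 suggests rejecting H0.
print(round(t, 2))
```

In a real dissertation the critical value (or p-value) would come from the t-distribution with the appropriate degrees of freedom, typically via a statistics package.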
Chapter 4: Research Methodology
The methodology section should describe how each specific objective will be achieved, with enough detail to permit an independent evaluation of the proposal. The technical procedures for carrying out the dissertation must be explained in a manner appropriate for the reader. This section should include:
Research Design: Is the study exploratory, descriptive, or explanatory? Why does this particular design suit the study?
Data Collection Sources: Describe all the sources that will be used for data collection.
Data Collection Methods:
How will the primary data be collected, i.e., survey(s), experiment(s), observation(s), etc.? Is it possible to use multiple methods? If yes, provide justification.
What is the target population?
What sampling frame will be used?
What type of sampling technique will be used?
Data Collection Tools/Instruments:
Which tools will be used for data collection (i.e., Questionnaire, Structured Interviews, Observations, etc) according to the need of the dissertation?
Why is a particular tool selected?
Is it possible to use multiple tools? If yes, provide justification.
How will the data be collected?
How will the quality control be assured during data collection?
How will practical issues be addressed? For example, if you are going to carry out a survey, think about where and for how long it will be carried out. Will organizations (specify names) provide you access (physical, time, documents, etc.) to what you need for your research?
Data Processing & Analysis: (Methods you will use to extract and process the information you will gather)
How will the analysis be carried out?
Scoring scheme/scale and the statistical methods that will be applied for the analysis of data should be described.
Which software package (MS Excel, SPSS, etc.) will be used for data entry and analysis?
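To make the scoring and analysis step concrete, here is a minimal sketch in Python; the item names and the 5-point Likert responses are hypothetical, and in practice the same descriptive statistics would often be produced in MS Excel or SPSS:

```python
import statistics

# Hypothetical 5-point Likert responses
# (1 = strongly disagree ... 5 = strongly agree).
responses = {
    "Q1": [4, 5, 3, 4, 4, 2, 5],
    "Q2": [2, 3, 3, 1, 2, 2, 4],
}

# Descriptive analysis per questionnaire item: mean, sample
# standard deviation, and number of respondents.
for item, scores in responses.items():
    mean = statistics.mean(scores)
    sd = statistics.stdev(scores)
    print(f"{item}: mean={mean:.2f} sd={sd:.2f} n={len(scores)}")
```

Whatever tool is used, the proposal should name the scoring scale and the statistics to be reported, so the analysis can be replicated.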
A bibliography is a list of source materials on a particular subject. In a formal report, it includes books and other library materials which have been consulted in preparation of the project. As part of the reference matter, it follows the appendices.
The real estate meltdown, otherwise referred to as the 'housing bubble', refers to a period in which housing prices declined across the United States, ultimately leading to the financial crisis of 2007-2008. This led to fears that the country was headed back to a depression similar to the Great Depression of the 1930s. There have been many explanations of what led to the housing meltdown, but the real question is: could the crisis have been avoided? My purpose in this paper is to discuss why the disaster could have been averted and whether we are at risk of facing it again.
To arrive at my goals, we first have to look at the causes of the housing meltdown in the period 2006-2008. With the crash of the housing market, economists and financial pundits came up with many explanations of what led to the bubble, the full impact of some of which is yet to be determined. It is therefore important to realize that the decline in the housing market was caused not by a single factor but by a number of them together.
Decline In Mortgage Interest Rates
In economics, we learn that when the rate of savings is low, interest rates tend to go high. However, that was not the case during the housing bubble. Mortgage rates were low despite low saving rates, mainly because of savings flowing into the US economy from countries such as China and Japan. According to Bernanke (2009), net savings from outside the country increased from an estimated 1.5% of GDP in 1995 to 6% as of 2006.
With the aim of making better returns from investment at lower risk, investors moved from US government securities to mortgage-backed securities that Freddie Mac and Fannie Mae issued. They receive sponsorship from the government, and investors expected that in case of unforeseen circumstances the government would bail the two out. Hence they were low risk.
In addition, mortgage-backed securities received high ratings from leading rating agencies such as Standard & Poor's. With housing prices rising, low mortgage interest rates had a hand in the housing bubble by enabling more house buyers to afford their monthly payments. According to Robert Shiller, speculation is what made house prices go up steadily: speculators purchased houses at lower prices in order to sell when prices rose. Following the 2001 recession, the Federal government lowered interest rates to keep the economy going; hence the second cause of the housing meltdown.
Reduction In Short-Term Rates of Interest
From 2002 to 2004 the government lowered interest rates with the aim of recovering from the earlier recession of 2001. This affected the housing bubble because, with housing prices constantly rising while household income remained steady, homeowners could not afford payments on their mortgage loans at current rates and therefore resorted to adjustable mortgage rates, which seemed preferable at the time. However, when the rates began to rise, this proved unmanageable for homeowners.
The other channel was leverage, that is, borrowers using borrowed money to invest. This encouraged mortgage lending and thus increased house prices. When the housing bubble burst, the high level of leverage present in the economy worsened the decline in the cost of housing.
Lack Of Strict Rules on Issuing of Mortgage Loans
During the period after the recession, the government under President Bill Clinton did not impose strict rules on financial institutions for the issuance of mortgage loans, with the aim of increasing the number of homeowners. A reduction in mortgage fees increased competition among mortgage-issuing firms, which therefore had to relax their standards to retain their market share; the fact that securities were being issued on mortgage loans encouraged this.
An increase in subprime mortgages, i.e., mortgages issued to persons likely to default, demonstrates this. Although these kinds of mortgages charged higher rates because of the risk, the whole practice was not worth it.
The fact that everyone believed housing prices would keep going up also contributed to the bubble. According to Robert Shiller (2005) in his book 'Irrational Exuberance', high levels of speculative fever had indeed added to the housing bubble. From house buyers, mortgage lenders, and rating agencies to even the government, none of them ever imagined that housing prices would go down.
Why The Housing Bubble Could Have Been Avoided – Real Estate Meltdown
From my analysis of the factors that led to the housing meltdown, all of the above contributed to the bubble, but the main factor was the pointless speculation that housing prices would continue to rise and that there was no reason to suggest otherwise. That is why, according to Robert Shiller (2005), irrational exuberance in any price bubble is difficult to notice and very hard to prevent.
However, this could have been avoided if only the players who took part in the excessive speculation on housing prices had thought otherwise. The belief by credit rating agencies and foreign investors that house prices in the US would keep rising was the primary factor that kept mortgage interest rates so low. This notion also led to a rise in the level of leverage in the economy, since low interest rates encouraged borrowing to invest in housing with the prospect of making good returns as prices increased.
I also think that government regulatory agencies should have regulated the constant rise in housing prices. That could have lowered speculation and minimized the chances of excessive leverage in the economy. Control of investment banks and mortgage-issuing agencies was necessary; there should have been a set of rules to follow, for without rules things can run out of control.
I also believe that the government did not adopt the best monetary policy at the time. With housing prices constantly rising, it was not in the best interest of the federal government to lower bank rates to increase the number of homeowners. The pricing of housing is similar to that of any other item: the law of supply and demand applies.
The Possibility of Another Bubble Leading to Real Estate Meltdown
There has been a lot of speculation in the media that we are about to experience another housing bubble, just a decade after the last occurrence that led to a financial crisis in the economy. Currently, the average cost of purchasing a house is quite high compared to ordinary income; this is one of the aspects we need to watch out for. Mortgage lenders, on the other side, are much stricter, and real estate investors have a hard time making sales, with houses staying as long as three months without being sold. There is a belief that housing prices will return to normal the moment investors with high hopes of making good returns leave the market. It is only speculation, but with the current state of house prices I cannot rule out the possibility of another bubble.
Real estate is indeed a venture that is quite rewarding, but this may change, as we have seen from our discussion of house price bubbles. I believe, however, that prevention was needed long before the bubble occurred, through government policies regulating housing prices. Indeed, it is necessary not to leave any stone unturned, since we cannot rule out the chances of another bubble. We can learn from the experience and be ready to prevent its recurrence.
Holt, Jeff. "A Summary of the Primary Causes of the Housing Bubble and the Resulting Credit Crisis: A Non-Technical Paper." The Journal of Business Inquiry 8.1 (2009): 120-129.
Schwartz, Herman M. Subprime Nation: American Power, Global Capital, and the Housing Bubble. Cornell University Press, 2009.
CQ Press Research. "Mortgage Crisis and Real Estate Meltdown." (Nov 2007): 926-927.
Gramlich, Edward M., and Robert D. Reischauer. Subprime Mortgages: America's Latest Boom and Bust. Urban Institute Press, 2007.
Cheng, I. A., Sahil Raina, and Wei Xiong. "Wall Street and the Housing Bubble: Bad Incentives, Bad Models, or Bad Luck?" University of Michigan mimeo, April 2012.
Title: Computer Science Ethics. As time progresses, the human race remains in a mindset of development. Humans are always seeking to reach higher ground in every field, unleashing a new dawn on the planet. Technology is one of the main things that progresses constantly, with new fields being introduced day by day, helping us understand a great deal about our universe while easing our daily work and routine. Ever since the introduction of machinery, humans have been fascinated by the fact that a physical object can carry out the work intended for us without our having to worry about the outcome or what happens in the process.
Times kept progressing, and reality has shifted toward the virtual world, where humans have focused on creating means to envision the unknown and to do their work in a virtual space called the internet, now the most commonly used technology in the world. One of the main tools that makes all of this possible is the computer science and software engineering fields as a whole.
These two professions are the essence and core of both the machinery and the virtual world, even though they sit on what is called "the backend" of the whole project. A computer scientist is a person who develops lines of code, in a certain coding language, that when put together form a functional asset.
This asset can be a piece of software, a program, a website, or even something as big as the internet. The most in-demand jobs over the past 5 years, and likely the next 10, revolve around computer scientists and software engineers, as this is the century we live in and this is the leading technology for humans at the moment, creating all the more need for computer science ethics. The programmer holds the power to shape the software as intended, to collect personal information from its users, and to easily track the tasks users carry out. This gives programmers enormous leverage over a piece of software, a service, or a website, as they hold a great deal of priceless data.
There might be millions of software engineers out there in the world with high paying jobs and capabilities that can rule this world. However, with great power comes great responsibility as there are many different setbacks and hard situations that such people face while carrying out their job which might change their whole life. This is due to the fact that such jobs have the opportunity to collect any type of personal data needed from the users of the particular product they are developing.
Computer Science Ethics and the British Computer Society (BCS)
As a simple example, Facebook has a database that saves your e-mail, phone number, interests, likes, photos, and almost everything you are interested in. Imagine what would happen if Facebook gave this information about you to a certain person; it would kill all the privacy you should have. This leads us to the most controversial topic related to software engineering: the various ethical issues that circulate around the profession as a whole. These ethical issues appear in many different forms that are hard to keep track of, and they should be respected at all times to protect the information of users and other stakeholders.
The main basic means of helping a programmer maintain healthy ethics in this job is the British Computer Society (BCS) code of conduct, which holds the basic rules and ethics of being a programmer, together with the dos and don'ts that should be observed. Furthermore, it states in full detail what the consequences of breaching the code of conduct would be, helping the programmer understand the importance of these ethics.
This code of conduct has four main points that form the ethics to be followed, together with somewhat detailed information on each point. The first main point stated in the BCS code is to have regard for the public interest of society. This means respecting things like public health, privacy, and the rights of third parties, and never discriminating between people while conducting the work.
The second point in the BCS code is professional competence and integrity. This point holds the guidelines on whether you should accept a task or not, based on your skills and knowledge. Furthermore, it speaks to the professional frame of mind you should always have, such as delivering work on time and always accepting criticism. This point is probably common to any code of conduct in the world, regardless of profession. The third point in the BCS code concerns the responsibility the programmer has toward relevant authorities. This point is important, as it asserts that programmers should not get involved in tasks that might harm any authority and should respect the laws of the place concerned.
Furthermore, it speaks generally to respecting the authority you are working for, prohibiting you from sharing its information with a different authority or causing a conflict between authorities. Last but not least, the BCS code tackles the programmer's responsibility to the profession itself. This means being effective and efficient at all times and trying to show good practice to other people, in order to promote the field and encourage outsiders to join the software engineering community.
This code should be kept in mind at all times by any programmer, no matter how big or small the task, without exception, as following the guidelines stated above is the key to a healthy and ethical work journey. On the other hand, not complying with the stated guidelines of the BCS code can lead to severe legal issues and might well stop you from being able to work again in this field.
This code of conduct forms a general frame for situations that might be faced in the course of the profession, together with a general idea of how to make decisions about accepting jobs. However, it does not cover every single event you might face in your life as a programmer, which creates a problem for software engineers, especially in this modern era, because of the many small loopholes that exist in this field. This has left many programmers confused about their decision-making and the extent to which they can carry on with their job. Another reason programmers break the code of conduct is the incentives that might be offered by the client, which usually means money.
Let us now discuss the most common situations modern programmers face in maintaining their ethics. To begin with, the first main problem faced by programmers, as stated before, is the fact that they keep log files. These log files are records of literally every single step taken by the users of a given piece of software, and so can act as a tracker on the users. These files are important in helping the programmer debug the system if it runs into any kind of problem.
However, having these files means the programmer can see exactly what is being done by the users, which kills any privacy the user has a right to. An example of a business that deals with log files in a smart way is Snapchat. Snapchat deletes log entries the second they have been dealt with by the intended user, which from a programmer's point of view makes for a very weak system. Yet users have loved and respected the idea of privacy and of a system that forgets easily.
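A toy sketch of such "forgetful" logging is shown below; this is purely an illustration of the idea of discarding entries as soon as they are handled, not Snapchat's actual implementation, and all names are invented:

```python
from collections import deque

class EphemeralLog:
    """A log that forgets each entry as soon as it is handled."""

    def __init__(self):
        self._entries = deque()

    def record(self, user, action):
        # Append a new (user, action) entry, oldest first.
        self._entries.append((user, action))

    def handle_next(self):
        # Process and immediately forget the oldest entry;
        # nothing about it is retained afterwards.
        user, action = self._entries.popleft()
        return f"handled {action} for {user}"

    def __len__(self):
        return len(self._entries)

log = EphemeralLog()
log.record("alice", "open_snap")
print(log.handle_next())
print(len(log))  # no record of the action remains
```

The trade-off the essay describes is visible here: once an entry is gone, it can no longer help with debugging, which is exactly why the design favours user privacy over diagnosability.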
This might not seem like a big deal when you look at the smaller picture, but these types of personal information in the wrong hands can affect users negatively in many different ways. Imagine an insurance company gaining more information about the smoking habits of a certain client; it would be sure to increase the client's rates in no time.
The second ethical issue faced by programmers worldwide is the fact that clients are always looking to squeeze money out of the pockets and wallets of users. This forces some programmers to change their business model from a service into an ATM that only allows deposits.
There is generally no free service on the internet; in practice, the non-free service is already ready but is promoted as free in order to attract the highest number of users, before fees are slowly introduced for the same service that was free a week ago. To prevent problems, programmers should include a small timeline showing when charges will be introduced for the product, in order to absorb the shock to users. This dilemma generally evolves into a bigger problem: most businesses do not earn enough money from their online services if they keep them free.
This may lead the firm to adopt a model that allows sharing of users' information in order to make more money from interested partners. This is a dilemma because the programmer should do the job by fulfilling the needs of the client while still maintaining proper privacy for the users and not breaching the code of conduct. It forces any software engineer to focus on what is asked of them and to ensure that such situations are communicated clearly to users beforehand.
Moving on to the third point, and a very important one to keep in mind while programming: the concept of protection and the different layers of security that should be in place over the users' data. There is no doubt that the programmer must provide protection for the data of the users of the software.
However, the problem arises when the programmer keeps asking whether this is enough protection or whether there should be more layers to protect the users and their work. If a programmer does not provide the right protection for the users of his program, then he is definitely violating the BCS code and is not showing the right work ethics. However, the point at which protection counts as "enough" is relative, differing from one person to another and from one task to another. This creates the problem that a programmer can claim he has done enough to protect users' rights while a user can claim that more privacy is needed.
Furthermore, the problem with adding more security layers is the weaker and slower performance the service will exhibit. Hence a solution like double encryption, for example, might not be the best idea even if it suits the users themselves. To make matters worse, any mistake or problem in the final algorithm can stop the whole system and may only be undone by restarting the whole algorithm and system.
Adding too much security to a service also limits the programmer from adding the features that would take the product to the next level; thus some programmers care less about security than about shipping more modern features to their users.
Moving on to the next ethical issue faced by programmers: the decision on whether or not to fix bugs in the algorithm. You might ask yourself, "why wouldn't programmers fix the bugs in their own system?" Well, you need to understand that many bugs are hard to understand and analyze, and even harder to solve, as it takes effort to go through the whole algorithm and deep thinking about the method of solving.
This is why tons of different bugs worldwide are left unhandled by engineers as not being of great concern. However, is it ethical not to solve all the bugs that arise, and to leave a few problems met by the user as is? This is where programmers get stuck and hold different points of view. Yet, with all the contradicting points of view, the scenario most of the time ends with no action taken, because there is no definitive action to take. You, as a programmer, are obliged by the BCS code to always do your best and solve any problem that arises, according to professional ethics.
However, that is not always the case, nor is every bug solvable by humans, which is what programmers consider an exceptional case. The dilemma keeps growing because the size of a problem is relative to the software engineer himself and may differ from one person to another, leaving no clear guidelines on when to leave bugs as is and when to take steps to solve them.
Another ethical problem is the scope of algorithm expansion. This is quite similar to the past two points: having more lines of code increases the chances of facing more problems and of aiding individuals to misuse the algorithm in many different ways. It is truly not the responsibility of the programmer if other people misuse his software or algorithm by any means; however, he should at least do his part in providing the right algorithm with the best efficiency.
A small example is the laptop camera, which has an LED that turns on with the camera as an indication. However, if you dig through the algorithm of this system, you might find a loophole that decouples the LED from the camera, allowing you or anyone else to hack the device and spy on the target. The challenge here is for the programmer to anticipate the different problems that might arise through their software and to find alternatives that solve the problem. The solution in this case came from the oldest type of camera, the shutter camera, which had a physical gate blocking the lens that could only be removed by the user of the laptop.
The last two main problems worth mentioning, which every programmer can relate to easily, are the data-requests dilemma and the pressures imposed by the nature of the internet. Concerning data requests, it is always a question of the extent to which the programmer should defend the customers. For most websites and services that work by collecting data (during the sign-up process), there comes a time when the company will be asked to provide or sell this data to the government or some other interested party. Here arises a huge conflict between complying with what is asked by a legal entity and preserving the privacy of your users.
The problem is that it is almost impossible to go toe to toe with legal entities or governmental authorities, as the process will take too much effort and probably all of your funds. Companies that go through this situation usually tend to shut down or simply comply, as it is always hard to do anything else. The second point of concern is how to deal with the nature of the internet, as this modern invention is full of hassles. Simply put, as any international law student would be aware, each and every country has its own set of laws; laws might be almost similar, but only between countries at the same level.
Developing a software product or a service might mean that the initial privacy and ethical standards a programmer provides are enough to suit only the needs of users in the local country concerned. However, when the product expands, the programmer is stuck, not knowing whether to set terms relevant to this country or to another. Furthermore, there may be many collisions between what is required by one government and what is not essential for another. Thus, it is impossible to have a piece of software or an algorithm that fits all the different standards, and it is always a hassle to find the right guidelines that will help you benefit as many people as possible.
These are just a few of the many situations, problems, hassles and loopholes that software engineers and programmers worldwide are likely to face. With that said, it is important to find the right way to assess and answer such problems, as there should be an approach capable of addressing them all.
It is important to keep the ethics of the profession and its code of conduct in mind at all times while performing the job and taking decisions, in order to at least reduce the margin for any issues that might arise. One of the best practices is to understand who the audience for a given product will be in the long run, and hence to set your algorithms from the start to fit all applicable standards. Furthermore, a good programmer should always keep a clear view of the product's future and the development that might follow.
This is because the development of a product is usually the factor that starts the erosion of certain rules and ethics. It is important for the programmer to include, in the contract binding them and their client, firm clauses that prevent future amendments from removing important security features. The main point of ethics and the code of conduct is to preserve the privacy of users together with the credibility and integrity of the software engineer. The best advice for any programmer is to be open and honest with users and clients: open about all the problems and about any behind-the-scenes deals being proposed by other firms or governmental entities. Letting people see what is happening is a key that will have your users aid you, stand beside you, or at least accept your terms, because they know the constraints you are bound by.
Whether or not a solution exists, all software engineers must respect and follow the BCS code of conduct at all times, as the oath they swore carries real honour. Over time, however, the code of conduct has attracted criticism for the outdated clauses it holds, and many people have suggested changes that would let it address specific issues rather than only general, vague concepts. Conferences are held periodically to discuss these ideas, and quite a few minor changes have resulted, which is something to look forward to. The best part is that the amendment process takes place at international conferences attended by hundreds of software engineers from around the world. This means that your voice as a programmer will be heard, and that what you have been put through can actually be prevented from happening again.
In conclusion, there are many problems and hardships that a software engineer may face throughout their professional career, despite the amazing perks the profession provides. As stated earlier, with great power comes great responsibility, and it is important for all employees to respect the responsibility placed upon them. The setbacks range from personal decision-making situations, where the programmer must choose whether or not to put more effort into a certain task, to situations forced by external factors, whether the client, other buyers or the government itself.
Such external factors are harder to deal with, but the engineer should take thorough precautions even before a problem arises, which can at least give them a head start when trouble comes. There is never an excuse to break the code of conduct, no matter the case, and this is what is taught to all graduating programmers, as it is the essence of a successful working life.
Botting, R. J. (2005). Teaching and learning ethics in computer science. Proceedings of the 36th SIGCSE Technical Symposium on Computer Science Education – SIGCSE '05.
Kizza, J. M. (2016). Computer Science Ethics and Ethical Analysis. Ethics in Computing Undergraduate Topics in Computer Science, 17-38.
Miller, K. (1988). Integrating Computer Science Ethics into the Computer Science Curriculum. Computer Science Education, 1(1), 37-52.
1981 British computer society conference. (1981). Computers in Industry, 2(4), 311-312.
British Computer Society. (n.d.). International Year Book and Statesmen Who's Who.
Alpert, S. A. (1996). Doctoral essays in computer ethics. Science and Engineering Ethics, 2(2), 225-247.
The ACM Code of Ethics and Professional Conduct. (2004). Computer Science Ethics Handbook, Second Edition CD-ROM.
Sillars, M. (2002). The British Computer Society industry structure model. IEE Seminar Technical Competence Frameworks – seeing through the fog.
Bynum, T. W. (2000). Special section on computer ethics. Science and Engineering Ethics, 6(2), 205-206.
Quinn, M. J. (2006). On teaching computer ethics within a computer science department. Science and Engineering Ethics, 12(2), 335-343.
If you enjoyed reading this post on Computer Science Ethics, I would be very grateful if you could help spread this knowledge by emailing this post to a friend, or sharing it on Twitter or Facebook. Thank you.
Reliability and Validity in Research – Inter-rater reliability is a statistical concept in research whereby a particular phenomenon is evaluated or rated by several raters. It is the extent or degree to which the rating scores agree across the multiple raters, bringing homogeneity and unanimity among them. To measure inter-rater reliability, count the number of agreements among the raters in a given judgment exercise as well as the total number of ratings made. The number of agreements is then divided by the total number of ratings and converted into a percentage to give the inter-rater reliability. McHugh (2012) provides a good example of how inter-rater reliability is calculated by reviewing the various methods stipulated by previous scholars.
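The agreements-divided-by-total-ratings calculation described above, together with the kappa statistic that McHugh (2012) reviews, can be sketched in a few lines of Python. The ratings below are invented for illustration:

```python
from collections import Counter

def percent_agreement(rater_a, rater_b):
    """Number of agreements divided by total ratings, as a percentage."""
    hits = sum(a == b for a, b in zip(rater_a, rater_b))
    return 100.0 * hits / len(rater_a)

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement corrected for chance agreement."""
    n = len(rater_a)
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement: probability both raters independently pick each category
    p_chance = sum(counts_a[c] * counts_b[c] for c in counts_a) / n ** 2
    return (p_observed - p_chance) / (1 - p_chance)

# Two hypothetical raters judging the same six items
a = ["yes", "no", "yes", "yes", "no", "yes"]
b = ["yes", "no", "no", "yes", "no", "yes"]
print(percent_agreement(a, b))  # 5 of 6 ratings agree, about 83.3%
print(cohens_kappa(a, b))       # about 0.667 once chance agreement is removed
```

Kappa is lower than raw percent agreement because some agreement would occur by chance alone, which is exactly why McHugh recommends it over the simple percentage.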
Test-retest reliability is another aspect of reliability: the extent or degree to which results obtained from the same test are consistent over time. In test-retest reliability, an identical test is administered to the same people on two or more occasions and the results are then compared. To measure test-retest reliability, two primary formulas are applied. The first, better suited to instances where two tests were conducted, is the Pearson correlation, which tests how well two sets of data correlate.
The other method is the intraclass correlation formula, applicable where more than two tests were administered. These formulas yield test-retest coefficients that range between 0 and 1. In her article on validity and reliability in social science research, Drost (2011) reviews the various aspects of reliability and validity and gives detailed examples of measuring test-retest reliability.
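As a quick illustration of the first formula, a test-retest coefficient can be computed directly from two sets of scores via the Pearson correlation. The scores below are invented for illustration:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two sets of scores."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

# Hypothetical scores for five people who took the same test twice
first_test = [78, 85, 62, 90, 71]
second_test = [80, 83, 65, 92, 69]
print(round(pearson_r(first_test, second_test), 3))  # about 0.976
```

A coefficient this close to 1 would indicate that the test produces highly consistent results over time; values near 0 would indicate poor test-retest reliability.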
Face validity, also referred to as logical validity, is the extent or degree to which an evaluation or investigation intuitively seems to quantify or measure the variable or construct it is objectively meant to measure. In other words, face validity is present when an assessment tool appears to do what it is meant to do. To measure face validity, one can assess the concepts or ideas to be measured against their theoretical and practical applications.
Predictive validity is the measure of how accurately a given value from a research study can be used to predict future patterns in the field studied. In their research on the predictive validity of public examinations, Obioma and Salau (2007) use this aspect to predict how students' performance in public examinations will affect their future academic performance at university and college level.
Concurrent reliability and validity
This entails the degree to which current test results relate to results from a previous test. For instance, if measurements of an individual's IQ are taken at two different intervals, concurrent validity is assessed by comparing how closely the results of the two tests match. A good example of research employing concurrent validity is the study by Tamanini et al. (2004) on the Portuguese version of the King's Health Questionnaire, administered to women after stress urinary incontinence surgery. The researchers show how this test is applied and measured by using it as the primary instrument in their research.
Addressing the issues of reliability and validity
For most qualitative researchers, the nature of the data matters more than the other descriptive elements of the research. This, however, does not rule out the need for conciseness in the descriptive sections. Reliability in research concerns the stability and consistency of the data, as well as the repeatability of results if several tests are done (LoBiondo-Wood & Haber, 2014). Validity, on the other hand, concerns the accuracy and integrity of the data or results collected from the various tests a researcher performs. Researchers address these issues of validity and reliability in different ways, depending on the purpose and kind of research they carry out.
Obioma and Salau (2007) investigate the effects of public examinations on the future academic performance of students. Their focus is therefore on data validation, ensuring that their conclusions and outcomes have the accuracy and integrity required to support their arguments. The two researchers apply both predictive and concurrent validity in their work, and predictive validity in particular is what their research question is built on.
They have made sure that the data and arguments they bring forth are substantially valid and convincing enough to achieve the objective of predicting the future academic performance of the children who sit the public examinations governed by the various bodies in the country. They have, however, not applied any reliability aspects in their research, at least none that can be easily identified.
In her article, Drost touches on both aspects, validity and reliability. She presents them not as original research but as a review of both aspects, in the dimension of the social sciences. For instance, she covers the various instances of both validity and reliability, providing real-life examples and the various methods that can be used to measure each. She approaches the concepts from a general perspective, explaining why researchers, especially in education and the social sciences, should adopt a culture of ensuring validity and reliability in their results. She explains the various instances of reliability, provides formulas and tools that can be effectively applied to measure them, and describes the elements that can affect the level of validity and reliability of data or results in research.
In conclusion, the concepts of validity and reliability are important in research, and researchers across fields should adopt a culture of achieving them in the results they obtain. As Drost argues, strong support for the validity and reliability of research not only makes the research more credible but also limits the critiques it may face and fills gaps that might otherwise be identifiable. A researcher should understand the various instances of both reliability and validity and know when it is appropriate to apply each in research.
McHugh, M. L. (2012). Interrater reliability: the kappa statistic. Biochemia Medica, 22(3), 276-282.
Drost, E. A. (2011). Validity and reliability of social science research. Education Research and Perspectives, 38(1), 105.
Obioma, G., & Salau, M. (2007). The predictive validity of public examinations: A case study of Nigeria. Nigerian Educational Research & Development Council (NERDC) Abuja.
Tamanini, J. T., Dambros, M., D'Ancona, C. A., Palma, P. C., Botega, N. J., Rios, L. A., & Netto Jr, N. R. (2004). Concurrent validity, internal consistency and responsiveness of the Portuguese version of the King's Health Questionnaire (KHQ) in women after stress urinary incontinence surgery. International Braz J Urol, 30(6), 479-486.
LoBiondo-Wood, G., & Haber, J. (2014). Reliability and validity. G. LoBiondo-Wood & J. Haber. Nursing research. Methods and critical appraisal for evidence-based practice, 289-309.
Critical Thinking and “What-If” Analyses in Management Decisions
“No problem can be solved by the same consciousness that created it.”
– Albert Einstein
“We are approaching a new age of synthesis. Knowledge cannot be merely a degree or skill . . . it demands a broader vision, capabilities in critical thinking and logical deduction without which we cannot have constructive progress.”
– Li Ka Shing
“To every complex question there is a simple answer and it is wrong.”
– H. L. Mencken
In its simplest interpretation, we all apply critical thinking in our daily lives, often without even giving a nod to the process we use to arrive at routine decisions. The common characteristics of the basic decision making we all use are elementary:
Gathering information and keeping informed about areas of interest and the particulars to be considered before arriving at a decision
Asking questions to ensure we clearly understand pertinent factors
Brainstorming
Weighing the evidence we have gathered, utilizing a "tried and true" method we usually rely on, and in so doing determining what is actually relevant to the problem or decision at hand
Taking historical elements into account, but assessing facts within their current context
Seeking to discern the truth of any claims or assertions, and determining if bias exists that would affect facts or outcomes
This pattern is repeated for all decisions, from the smallest – for instance, what apparel to wear, in light of planned physical activities or appropriateness for an event – to the most important of decisions, such as whether or not to propose or accept an offer of marriage, or what university to attend.
From a more sophisticated perspective, the simple steps commonly used to arrive at a decision can be deconstructed as:
Structured problem solving
Risk assessment and management
Management of thought process
Arrival at a solution and implementation
Brainstorming can help determine the framework of inquiry needed to gather the most pertinent information, which depends, of course, on the answers being sought. The methodology used in the problem-solving process provides the structure, and several methods and systems can be employed depending on the nature and scope of the factors to be evaluated and their relationship, if any. The broader the criteria, the more interrelated the set of decision problems and apparent alternatives, and the more variable in number and threat level the risks to be considered, the more elaborate the methodology must be in order to assimilate all pertinent information and accommodate as many options and outcomes as possible. Once again, brainstorming is required to envision all the potential perils or disruptive forces that might impinge upon the success of an entity or endeavor.
Simple outranking of one outcome above the next provides a variety of alternative responses to unintended events, pairing alternatives to determine the better performing of each pair. Once it is determined which alternative is more effective, or outranks the other, these assessments can be aggregated into a ranking or partial-ranking scheme which, although it may not deliver a definitive answer, offers a reduced shortlist of acceptable alternatives.
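The pairwise outranking idea can be sketched as a simple tally: compare every pair of alternatives criterion by criterion, record which member of each pair outranks the other, and sort by wins to produce the shortlist. The alternatives and scores below are invented for illustration:

```python
from itertools import combinations

# Hypothetical alternatives scored on three criteria (higher is better)
scores = {
    "plan_a": [7, 5, 9],
    "plan_b": [6, 8, 4],
    "plan_c": [8, 6, 5],
}

wins = {name: 0 for name in scores}
for x, y in combinations(scores, 2):
    # x outranks y if it beats y on more criteria than y beats x
    x_better = sum(sx > sy for sx, sy in zip(scores[x], scores[y]))
    y_better = sum(sy > sx for sx, sy in zip(scores[x], scores[y]))
    if x_better > y_better:
        wins[x] += 1
    elif y_better > x_better:
        wins[y] += 1

# Aggregate the pairwise results into a ranked shortlist
shortlist = sorted(scores, key=wins.get, reverse=True)
print(shortlist)  # ['plan_c', 'plan_a', 'plan_b']
```

As the text notes, such a ranking does not prove one plan is best in an absolute sense; it only narrows the field to the alternatives that survive the pairwise comparisons.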
Progressive decision-making tackles one element at a time, in order of importance, placing decisions in a sequence that comprises a plan of avoidance, attack or defense in the face of envisioned obstacles or other developments. Management of the thought process provides a discipline that enables a rational approach to even the most upsetting of possibilities, removing emotion to thereby clarify thought and enable focus. Arrival at a solution and implementation, perforce, requires that the number of likely risks and feasible alternatives be winnowed and refined, to arrive at those scenarios that are most credible, so that they may be addressed in some detail.
Decision Making Criteria
When facing single-criterion or limited-criteria problems and decisions, a number of relatively simple methods are available to determine the alternative offering the best value or outcome. Elementary decision tools include decision trees that sequentially branch one decision into the next in a basic "this, therefore that" progression; decision tables of alternatives; pro-con analytical comparisons; maximax/maximin strategies; cost-benefit analyses; contingency planning; and what-if analyses.
All are elementary pencil-to-paper analyses, simple enough to calculate manually, with no need of sophisticated mathematical skill or computational resources.
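For instance, the maximax and maximin strategies mentioned above reduce to one-line selections over a decision table of payoffs. The alternatives and figures below are invented for illustration:

```python
# Hypothetical payoff table: profit of each alternative under three scenarios
payoffs = {
    "expand":   [120, 40, -30],
    "maintain": [60, 50, 20],
    "downsize": [30, 30, 25],
}

# Maximax (optimistic): the alternative with the best best-case payoff
maximax = max(payoffs, key=lambda alt: max(payoffs[alt]))
# Maximin (pessimistic): the alternative with the best worst-case payoff
maximin = max(payoffs, key=lambda alt: min(payoffs[alt]))

print(maximax)  # expand: its best case of 120 beats 60 and 30
print(maximin)  # downsize: its worst case of 25 beats 20 and -30
```

The two strategies give different answers on the same table, which is the point: the choice of decision rule encodes the decision maker's attitude toward risk.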
Multi-attribute optimization problems, such as those often addressed by planning departments and larger businesses and organizations, frequently involve a finite number of criteria but an infinite number of feasible alternatives.
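For a finite shortlist of candidates, a common first pass at such multi-attribute problems is a weighted-sum score: weight each criterion by its importance and aggregate. The weights, vendor names, and scores below are invented for illustration:

```python
# Hypothetical criterion weights (summing to 1) and attribute scores per candidate
weights = {"cost": 0.5, "quality": 0.3, "speed": 0.2}
alternatives = {
    "vendor_a": {"cost": 7, "quality": 9, "speed": 5},
    "vendor_b": {"cost": 8, "quality": 6, "speed": 8},
}

def weighted_score(attrs):
    """Collapse one candidate's attribute scores into a single number."""
    return sum(weights[c] * attrs[c] for c in weights)

best = max(alternatives, key=lambda a: weighted_score(alternatives[a]))
print(best)  # vendor_b: 7.4 versus 7.2 for vendor_a
```

When the alternatives form a continuous space rather than a list, the same weighted objective is typically handed to a mathematical optimizer instead of enumerated.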