These are some of the main principles underlying the course structure.
The product is you, NOT what you build.
The software product you build is a side effect only. You are the product of this course. This means:
We may not take the most efficient route to building the software product. We take the route that allows you to learn the most.
Building a software product that is unique, creative, and shiny is not our priority (although we try to do a bit of that too). Learning to take pride in, and discovering the joy of, high quality software engineering work is our priority.
Following from that, we evaluate you not just on how much you've done, but also on how well you've done those things. Here are some of the aspects we focus on:
We appreciate: the ability to deal with low-level details.
But we value more: the ability to abstract over details, generalize, and see the big picture.
We appreciate: a drive to learn the latest and greatest technologies.
But we value more: the ability to make the best of the given tools.
We appreciate: the ability to find problems that interest you and solve them.
But we value more: the ability to solve the given problem to the best of your ability.
We appreciate: the ability to burn the midnight oil to meet a deadline.
But we value more: the ability to schedule work so that the need for 'last minute heroics' is minimal.
We appreciate: a preference to do things you like or things you are good at.
But we value more: the ability to buckle down and deliver on important things that you don't necessarily like or aren't good at.
We appreciate: the ability to deliver the desired end results.
But we value more: the ability to deliver in a way that shows how well you delivered (i.e. visibility of your work).
We learn together, NOT compete against each other.
You are not in a competition. Our grading is not forced on a bell curve.
Learn from each other. That is why we open-source your submissions.
Teach each other, even those in other teams. Those who do it well can become tutors next time.
Continuously engage, NOT last minute heroics.
We want to train you to do software engineering in a steady and repeatable manner that does not require 'last minute heroics'.
In this course, last minute heroics will not earn you a good project grade, and last minute mugging will not earn you a good exam grade.
Where you reach at the end matters, NOT what you knew at the beginning.
When you start the course, some others in the class may appear to know a lot more than you. Don't let that worry you. The final grade depends on what you know at the end, not what you knew to begin with. All marks allocated to intermediate deliverables are within the reach of everyone in the class irrespective of their prior knowledge.
A large portion of this book is based on learning materials created by the following Software Engineering Instructors for their students (in alphabetical order):
The software architecture of a program or computing system is the structure or structures of the system, which comprise software elements, the externally visible properties of those elements, and the relationships among them. Architecture is concerned with the public side of interfaces; private details of elements—details having to do solely with internal implementation—are not architectural.
--- Software Architecture in Practice (2nd edition), Bass, Clements, and Kazman
The software architecture shows the overall organization of the system and can be viewed as a very high-level design. It usually consists of a set of interacting components that fit together to achieve the required functionality. It should be a simple and technically viable structure that is well-understood and agreed-upon by everyone in the development team, and it forms the basis for the implementation.
A possible architecture for a Minesweeper game:
Main components:
GUI: Graphical user interface
TextUi: Textual user interface
ATD: An automated test driver used for testing the game logic
Logic: Computation and logic of the game
Store: Storage and retrieval of game data (high scores etc.)
The architecture is typically designed by the software architect, who provides the technical vision of the system and makes high-level (i.e. architecture-level) technical decisions about the project.
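To make the relationships between such components concrete, here is a minimal Java sketch (the names and methods are invented for illustration, not taken from an actual Minesweeper implementation). The two UIs and the test driver would all depend on Logic, while Logic depends only on Store:

    // Logic exposes the game operations; GUI, TextUi, and ATD all work through this interface.
    interface Logic {
        void markCellAt(int row, int col);
        boolean isGameWon();
    }

    // Store handles persistence of game data such as high scores.
    interface Store {
        void saveHighScore(int score);
    }

    class MinesweeperLogic implements Logic {
        private final Store store; // Logic depends on Store, but not on any UI

        MinesweeperLogic(Store store) {
            this.store = store;
        }

        public void markCellAt(int row, int col) {
            // game rules would go here
        }

        public boolean isGameWon() {
            return false; // placeholder
        }
    }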
Exercises:
Statements about architecture
Architecture diagrams
Reading
Can interpret an architecture diagram
Architecture diagrams are free-form diagrams. There is no universally adopted standard notation for architecture diagrams. Any symbols that reasonably describe the architecture may be used.
Drawing
While architecture diagrams have no standard notation, try to follow these basic guidelines when drawing them.
Minimize the variety of symbols. If the symbols you choose do not have widely understood meanings (e.g. a drum symbol is widely understood as representing a database), explain their meaning.
Avoid the indiscriminate use of double-headed arrows to show interactions between components.
Consider the two architecture diagrams of the same software given below. Because Diagram 2 uses double-headed arrows, the important fact that GUI has a bidirectional dependency with the Logic component is no longer captured.
Architectural styles
Introduction
What
Can explain architectural styles
Software architectures follow various high-level styles (aka architectural patterns), just like how building architectures follow various architecture styles.
N-tier architectural style
What
Can identify n-tier architectural style
In the n-tier style, higher layers make use of services provided by lower layers. Lower layers are independent of higher layers. Other names: multi-layered, layered.
Operating systems and network communication software often use n-tier style.
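As a small Java sketch of the idea (layer names invented for this example), note how each lower layer compiles without any knowledge of the layers above it:

    // Lowest layer: storage. Knows nothing about the layers above it.
    class StorageLayer {
        String load(String key) {
            return "value-for-" + key;
        }
    }

    // Middle layer: logic. Uses only the layer below it.
    class LogicLayer {
        private final StorageLayer storage = new StorageLayer();

        String process(String key) {
            return storage.load(key).toUpperCase();
        }
    }

    // Top layer: UI. Uses only the layer below it.
    class UiLayer {
        private final LogicLayer logic = new LogicLayer();

        void show(String key) {
            System.out.println(logic.process(key));
        }
    }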
Client-server architectural style
What
Can identify the client-server architectural style
The client-server style has at least one component playing the role of a server and at least one client component accessing the services of the server. This is an architectural style used often in distributed applications.
An online game and a web application are two typical examples of systems that use the client-server style.
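As a bare-bones Java illustration (a toy 'echo' service on port 8080, chosen arbitrarily for this sketch), the server component waits for connections and serves requests, while a separate client component connects to use the service:

    import java.io.*;
    import java.net.*;

    // Server component: offers an 'echo' service to any client that connects.
    class EchoServer {
        public static void main(String[] args) throws IOException {
            try (ServerSocket server = new ServerSocket(8080);
                 Socket client = server.accept();
                 BufferedReader in = new BufferedReader(new InputStreamReader(client.getInputStream()));
                 PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
                out.println("echo: " + in.readLine()); // serve the client's request
            }
        }
    }

    // Client component: connects to the server and uses its service.
    class EchoClient {
        public static void main(String[] args) throws IOException {
            try (Socket socket = new Socket("localhost", 8080);
                 PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
                 BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()))) {
                out.println("hello");
                System.out.println(in.readLine()); // prints "echo: hello"
            }
        }
    }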
Transaction processing architectural style
What
Can identify transaction processing architectural style
The transaction processing style divides the workload of the system down to a number of transactions which are then given to a dispatcher that controls the execution of each transaction. Task queuing, ordering, undo etc. are handled by the dispatcher.
In this example from a banking system, transactions are generated by the terminals; they are then sent to a central dispatching unit, which in turn dispatches each transaction to the appropriate unit to execute it.
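A minimal Java sketch of the idea (class names invented for illustration): transactions are queued with a dispatcher, which controls when, and in what order, each one executes.

    import java.util.ArrayDeque;
    import java.util.Queue;

    // A transaction is a self-contained unit of work, e.g. a deposit or a withdrawal.
    interface Transaction {
        void execute();
    }

    // The dispatcher accepts transactions from the terminals and controls their execution.
    class Dispatcher {
        private final Queue<Transaction> pending = new ArrayDeque<>();

        void submit(Transaction transaction) {
            pending.add(transaction);
        }

        void run() {
            while (!pending.isEmpty()) {
                pending.poll().execute(); // queuing, ordering, undo etc. would be handled here
            }
        }
    }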
Service-oriented architectural style
What
Can identify service-oriented architectural style
The service-oriented architecture (SOA) style builds applications by combining functionalities packaged as programmatically accessible services. SOA aims to achieve interoperability between distributed services, which may not even be implemented using the same programming language. A common way to implement SOA is through the use of XML web services where the web is used as the medium for the services to interact, and XML is used as the language of communication between service providers and service users.
Suppose that Amazon.com provides a web service for customers to browse and buy merchandise, while HSBC provides a web service for merchants to charge HSBC credit cards. Using these web services, an ‘eBookShop’ web application can be developed that allows HSBC customers to buy merchandise from Amazon and pay for them using HSBC credit cards. Because both Amazon and HSBC services follow the SOA architecture, their web services can be reused by the web application, even if all three systems use different programming platforms.
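The scenario above could be sketched in Java as follows. The interfaces and method names are invented for illustration; in a real SOA setting each call would travel over the network (e.g. as an XML web service request) rather than go to a local object.

    // Programmatic contracts of the two external services (illustrative only).
    interface ShoppingService {
        double priceOf(String itemId);
    }

    interface PaymentService {
        boolean charge(String cardNumber, double amount);
    }

    // The eBookShop application is built by composing the two services.
    class EBookShop {
        private final ShoppingService shop;
        private final PaymentService payments;

        EBookShop(ShoppingService shop, PaymentService payments) {
            this.shop = shop;
            this.payments = payments;
        }

        boolean buy(String itemId, String cardNumber) {
            return payments.charge(cardNumber, shop.priceOf(itemId));
        }
    }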
Event-driven architectural style
What
Can identify event-driven architectural style
The event-driven style controls the flow of the application by detecting events from event emitters and communicating those events to interested event consumers. This architectural style is often used in GUIs.
When the ‘button clicked’ event occurs in a GUI, that event can be transmitted to components that are interested in reacting to that event. Similarly, events detected at a printer port can be transmitted to components related to operating the printer. The same event can be sent to multiple consumers too.
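Here is a minimal, hand-rolled Java sketch of the emitter/consumer idea (not code from any specific GUI framework):

    import java.util.ArrayList;
    import java.util.List;

    // Event consumers implement this interface to react to the event.
    interface ClickListener {
        void onClick();
    }

    // Event emitter: detects the event and notifies every interested consumer.
    class Button {
        private final List<ClickListener> listeners = new ArrayList<>();

        void addClickListener(ClickListener listener) {
            listeners.add(listener);
        }

        void fireClick() { // in a real GUI, the framework detects the click
            for (ClickListener listener : listeners) {
                listener.onClick(); // the same event can go to multiple consumers
            }
        }
    }

    class Demo {
        public static void main(String[] args) {
            Button button = new Button();
            button.addClickListener(() -> System.out.println("logging the click"));
            button.addClickListener(() -> System.out.println("updating the display"));
            button.fireClick();
        }
    }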
More
More styles
Can name several other architecture styles
Other well-known architectural styles include the pipes-and-filters architecture, the broker architecture, the peer-to-peer architecture, and the message-oriented architecture.
Resources:
Pipes and Filters pattern -- an article from Microsoft about the pipes and filters architectural style
Broker pattern -- Wikipedia article on the broker architectural style
Most applications use a mix of these architectural styles.
An application can use a client-server architecture where the server component comprises several layers, i.e. it uses the n-tier architecture.
Exercises:
Comment on how to use architecture styles in Minesweeper.
Avoid empty catch blocks
Avoid empty catch statements, as they are a way to ignore errors silently (which is not a good thing). In cases when it is unavoidable, at least give a comment to explain why the catch block is left empty.
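For example, in Java (Thread.sleep is used here only because it forces a checked exception to be handled):

    class PauseExample {
        // Bad: the catch block silently swallows the exception, leaving no trace of the failure.
        void pauseSilently() {
            try {
                Thread.sleep(1000);
            } catch (InterruptedException e) {
            }
        }

        // Better: if an empty catch block is truly unavoidable, explain why ignoring the error is safe.
        void pauseBestEffort() {
            try {
                Thread.sleep(1000);
            } catch (InterruptedException e) {
                // Being woken up early is harmless here; the pause is purely cosmetic.
            }
        }
    }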
Delete dead code
Get rid of unused code the moment it becomes redundant. You might feel reluctant to delete code you have painstakingly written, even if you have no use for that code anymore ("I spent a lot of time writing that code; what if I need it again?"). Consider all code as baggage you have to carry. If you need that code again, simply recover it from the revision control tool you are using. Deleting code you wrote previously is a sign that you are improving.
Intermediate
Minimize scope of variables
Can improve code quality using technique: minimize scope of variables
Minimize global variables. Global variables may be the most convenient way to pass information around, but they do create implicit links between code segments that use the global variable. Avoid them as much as possible.
Define variables in the least possible scope. For example, if the variable is used only within the if block of the conditional statement, it should be declared inside that if block.
The most powerful technique for minimizing the scope of a local variable is to declare it where it is first used. -- Effective Java, by Joshua Bloch
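A small Java illustration (names invented for this example): the second version declares the variable where it is first used, inside the only block that needs it.

    class Pricing {
        // 'discount' is visible to the whole method, although only the if block needs it.
        double priceAfterDiscountWideScope(double price, boolean isMember) {
            double discount = 0;
            if (isMember) {
                discount = price * 0.05;
                price = price - discount;
            }
            return price;
        }

        // 'discount' is declared where it is first used, in the least possible scope.
        double priceAfterDiscount(double price, boolean isMember) {
            if (isMember) {
                double discount = price * 0.05;
                price = price - discount;
            }
            return price;
        }
    }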
Minimize code duplication
Can improve code quality using technique: minimize code duplication
Code duplication, especially when you copy-paste-modify code, often indicates a poor quality implementation. While it may not be possible to have zero duplication, always think twice before duplicating code; most often there is a better alternative.
This guideline is closely related to the DRY Principle.
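A small Java illustration (invented example): the duplicated greeting logic is extracted into one place, so a future change to the format needs to be made only once.

    class Greetings {
        // Before: copy-paste-modified code; the formatting logic exists twice.
        String greetCustomer(String name) {
            return "Dear " + name.trim().toUpperCase() + ", welcome back!";
        }

        String greetGuest(String name) {
            return "Dear " + name.trim().toUpperCase() + ", welcome!";
        }

        // After: the common part lives in exactly one place.
        private String salutation(String name) {
            return "Dear " + name.trim().toUpperCase();
        }

        String greetCustomerBetter(String name) {
            return salutation(name) + ", welcome back!";
        }

        String greetGuestBetter(String name) {
            return salutation(name) + ", welcome!";
        }
    }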
Introduction
It is safer to use language constructs in the way they are meant to be used, even if the language allows shortcuts. Such coding practices are common sources of bugs. Know them and avoid them.
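For instance, Java allows the braces of a single-statement if to be omitted, a shortcut that invites bugs when the code is later extended (example invented for illustration):

    class BraceShortcut {
        static boolean isSignedIn = false;

        public static void main(String[] args) {
            // Looks as if both lines are guarded by the if, but only the first one is.
            if (isSignedIn)
                System.out.println("loading profile");
                System.out.println("showing dashboard"); // always runs; the indentation is misleading

            // Using the construct as it is meant to be used avoids the trap.
            if (isSignedIn) {
                System.out.println("loading profile");
                System.out.println("showing dashboard");
            }
        }
    }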
Comments
Introduction
Can explain the need for commenting minimally but sufficiently
Good code is its own best documentation. As you’re about to add a comment, ask yourself, ‘How can I improve the code so that this comment isn’t needed?’ Improve the code and then document it to make it even clearer. -- Steve McConnell, author of Code Complete
Some think commenting heavily increases the 'code quality'. That is not so. Avoid writing comments to explain bad code. Improve the code to make it self-explanatory.
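A small Java example of the idea (invented names): rather than writing a comment to explain a cryptic variable, rename the variable so the comment becomes unnecessary.

    class Ages {
        // Weak: the comment exists only to prop up an unclear name.
        int a; // age of the customer, in years

        // Better: the name explains itself, so no comment is needed.
        int customerAgeInYears;
    }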
Introduction
One essential way to improve code quality is to follow a consistent style. That is why software engineers usually follow a strict coding standard (aka style guide).
The aim of a coding standard is to make the entire code base look like it was written by one person. A coding standard is usually specific to a programming language and specifies guidelines such as the locations of opening and closing braces, indentation styles and naming styles (e.g. whether to use Hungarian style, Pascal casing, Camel casing, etc.). It is important that the whole team/company uses the same coding standard and that the standard is generally not inconsistent with typical industry practices. If a company's coding standard is very different from what is typically used in the industry, new recruits will take longer to get used to the company's coding style.
IDEs can help to enforce some parts of a coding standard e.g. indentation rules.
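For instance, here is the same Java snippet written against, and then following, a typical Java style guide (the naming and brace-placement rules assumed here are common conventions, not any specific company's standard):

    // Drifts from a typical Java standard: unusual brace placement and naming.
    class temperature_converter
    {
        static double To_Fahrenheit(double Celsius)
        { return Celsius * 9 / 5 + 32; }
    }

    // The same code following a typical standard: PascalCase type name,
    // camelCase method and parameter names, consistent brace placement.
    class TemperatureConverter {
        static double toFahrenheit(double celsius) {
            return celsius * 9 / 5 + 32;
        }
    }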
Exercises:
What is the recommended approach regarding coding standards?
Introduction
What
Can explain the importance of code quality
Always code as if the person who ends up maintaining your code will be a violent psychopath who knows where you live. -- Martin Golding
Production code needs to be of high quality. Given how the world is becoming increasingly dependent on software, poor quality code is something no one can afford to tolerate.
Avoid long methods
Avoid long methods, as they often contain more information than the reader can process at a time. Take corrective action when a method goes beyond 30 lines of code. The bigger the haystack, the harder it is to find a needle.
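One common corrective action is to extract cohesive chunks of a long method into smaller, well-named methods, as in this invented Java sketch:

    class ReportPrinter {
        // The top-level method now reads like a summary of the steps.
        void printReport(String name) {
            String data = load(name);
            String formatted = format(data);
            print(formatted);
        }

        private String load(String name) {
            return "data for " + name;
        }

        private String format(String data) {
            return data.toUpperCase();
        }

        private void print(String formatted) {
            System.out.println(formatted);
        }
    }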
Make the code obvious
Make the code as explicit as possible, even if the language syntax allows things to be implicit. Here are some examples:
[Java] Use explicit type conversion instead of implicit type conversion.
[Java, Python] Use parentheses/braces to show groupings even when they can be skipped.
[Java, Python] Use enumerations when a certain variable can take only a small number of finite values. For example, instead of declaring the variable 'state' as an integer and using values 0, 1, 2 to denote the states 'starting', 'enabled', and 'disabled' respectively, declare 'state' as type SystemState and define an enumeration SystemState that has values 'STARTING', 'ENABLED', and 'DISABLED'.
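The enumeration mentioned in the last point could look like this in Java (a minimal sketch):

    // Instead of: int state = 0; // 0 = starting, 1 = enabled, 2 = disabled
    enum SystemState {
        STARTING, ENABLED, DISABLED
    }

    class Device {
        private SystemState state = SystemState.STARTING;

        void enable() {
            state = SystemState.ENABLED; // no magic numbers; invalid values are impossible
        }
    }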
Avoid premature optimizations
Optimizing code prematurely has several drawbacks:
You may not know which parts are the real performance bottlenecks. This is especially the case when the code undergoes transformations (e.g. compiling, minifying, transpiling, etc.) before it becomes an executable. Ideally, you should use a profiler tool to identify the actual bottlenecks of the code first, and optimize only those parts.
Optimizing can complicate the code, affecting correctness and readability.
Hand-optimized code can be harder for the compiler to optimize (the simpler the code, the easier it is for the compiler to optimize). In many cases, a compiler can do a better job of optimizing the runtime code if you don't get in the way by trying to hand-optimize the source code.
'Make it work, make it right, make it fast' is a popular saying in the industry, which means that in most cases, getting the code to perform correctly should take priority over optimizing it. If the code doesn't work correctly, it has no value no matter how fast/efficient it is.
Premature optimization is the root of all evil in programming. -- Donald Knuth
Of course, there are cases in which optimizing takes priority over other things, e.g. when writing code for resource-constrained environments. This guideline is simply a caution that you should optimize only when it is really needed.
Practice KISSing
Do not try to write 'clever' code. "Keep it simple, stupid" (KISS), as the old adage goes. For example, do not dismiss the brute-force yet simple solution in favor of a complicated one because of some 'supposed benefits' such as 'better reusability' unless you have a strong justification.
Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it. -- Brian W. Kernighan
Programs must be written for people to read, and only incidentally for machines to execute. -- Abelson and Sussman
Introduction
Programs should be written and polished until they acquire publication quality. --Niklaus Wirth
Among various dimensions of code quality, such as run-time efficiency, security, and robustness, one of the most important is readability (aka understandability). This is because in any non-trivial software project, code needs to be read, understood, and modified by other developers later on. Even if you do not intend to pass the code to someone else, code quality is still important because you will become a 'stranger' to your own code someday.
Use standard words
Use correct spelling in names. Avoid 'texting-style' spelling. Avoid foreign language words, slang, and names that are only meaningful within specific contexts/times, e.g., terms from private jokes, or a TV show currently popular in your country.
Intermediate
Use name to explain
Can improve code quality using technique: use name to explain
A name is not just for differentiation; it should explain the named entity to the reader accurately and at a sufficient level of detail.
Bad: processInput() (what 'process'?) → Good: removeWhiteSpaceFromInput()
Bad: flag → Good: isValidInput
Bad: temp
If a name has multiple words, they should be in a sensible order.
Bad: bySizeOrder() → Good: orderBySize()
Imagine going to the doctor's and saying "My eye1 is swollen"! Don’t use numbers or case to distinguish names.
Bad: value1, value2 or value, Value → Good: originalValue, finalValue
Not too long, not too short
Can improve code quality using technique: not too long, not too short
While it is preferable not to have lengthy names, names that are 'too short' are even worse. If you must abbreviate or use acronyms, do it consistently. Explain their full meaning at an obvious location.
Avoid misleading names
Can improve code quality using technique: avoid misleading names
Related things should be named similarly, while unrelated things should NOT.
Example: Consider these variables
colorBlack: hex value for color black
colorWhite: hex value for color white
colorBlue: number of times blue is used
hexForRed: hex value for color red
This is misleading because colorBlue is named similarly to colorWhite and colorBlack but has a different purpose, while hexForRed is named differently but has a very similar purpose to the first two variables. The following is better:
hexForBlack, hexForWhite, hexForRed
blueColorCount
Avoid misleading or ambiguous names (e.g. those with multiple meanings), similar-sounding names, hard-to-pronounce names, and names that differ from each other only slightly (e.g. avoid ambiguities like "is that a lowercase L, a capital I, or the number 1?", or "is that the number 0 or the letter O?").
Bad: phase0 → Good: phaseZero (is that a zero or the letter O?)
Bad: rwrLgtDirn → Good: rowerLegitDirection (hard to pronounce)
Bad: right, left, wrong → Good: rightDirection, leftDirection, wrongResponse (is right 'correct', or the opposite of left?)
Bad: redBooks, readBooks → Good: redColorBooks, booksRead (red and read, in the past tense, sound the same)
Bad: FiletMignon → Good: egg (if the requirement is just any food name, egg is much easier to type and say than FiletMignon)
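A small, hypothetical Java fragment that pulls the naming guidelines above together:

    class NamingExamples {
        // Bad (for contrast): int d;  int colorBlue;   // days? distance? a color value or a count?

        // Better: names explain the entity, and related things are named alike.
        private int elapsedDays;
        private int blueColorCount;
        private String hexForRed = "#FF0000";
        private String hexForBlack = "#000000";

        // Better than flag or proc(): the name states what the value means and what the method does.
        boolean isValidInput(String input) {
            return input != null && !input.isBlank();
        }
    }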
The collections framework
This section uses extracts from the Java Tutorial, with some adaptations.
A collection — sometimes called a container — is simply an object that groups multiple elements into a single unit. Collections are used to store, retrieve, manipulate, and communicate aggregate data.
Typically, collections represent data items that form a natural group, such as a poker hand (a collection of cards), a mail folder (a collection of letters), or a telephone directory (a mapping of names to phone numbers).
The collections framework is a unified architecture for representing and manipulating collections. It contains the following:
Interfaces: These are abstract data types that represent collections. Interfaces allow collections to be manipulated independently of the details of their representation. Example: the List<E> interface can be used to manipulate list-like collections which may be implemented in different ways such as ArrayList<E> or LinkedList<E>.
Implementations: These are the concrete implementations of the collection interfaces. In essence, they are reusable data structures. Example: the ArrayList<E> class implements the List<E> interface while the HashMap<K, V> class implements the Map<K, V> interface.
Algorithms: These are the methods that perform useful computations, such as searching and sorting, on objects that implement collection interfaces. The algorithms are said to be polymorphic: that is, the same method can be used on many different implementations of the appropriate collection interface. Example: the sort(List<E>) method can sort a collection that implements the List<E> interface.
A well-known example of a collections framework is the C++ Standard Template Library (STL). Although both are collections frameworks and their syntaxes look similar, note that there are important philosophical and implementation differences between the two.
The following list describes the core collection interfaces:
Collection — the root of the collection hierarchy. A collection represents a group of objects known as its elements. The Collection interface is the least common denominator that all collections implement and is used to pass collections around and to manipulate them when maximum generality is desired. Some types of collections allow duplicate elements, and others do not. Some are ordered and others are unordered. The Java platform doesn't provide any direct implementations of this interface but provides implementations of more specific subinterfaces, such as Set and List. Also see the Collection API.
Set — a collection that cannot contain duplicate elements. This interface models the mathematical set abstraction and is used to represent sets, such as the cards comprising a poker hand, the courses making up a student's schedule, or the processes running on a machine. Also see the Set API.
List — an ordered collection (sometimes called a sequence). Lists can contain duplicate elements. The user of a List generally has precise control over where in the list each element is inserted and can access elements by their integer index (position). Also see the List API.
Queue — a collection used to hold multiple elements prior to processing. Besides basic Collection operations, a Queue provides additional insertion, extraction, and inspection operations. Also see the Queue API.
Map — an object that maps keys to values. A Map cannot contain duplicate keys; each key can map to at most one value. Also see the Map API.
Others: Deque, SortedSet, SortedMap
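A short sketch of the three parts working together, using standard java.util classes (the card and directory examples are illustrative):

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class CollectionsDemo {
        public static void main(String[] args) {
            // Interface (List) + implementation (ArrayList): the code below depends only on List.
            List<String> cards = new ArrayList<>();
            cards.add("Queen");
            cards.add("Ace");
            cards.add("Ten");

            // Algorithm: the polymorphic sort works on any List implementation.
            Collections.sort(cards);
            System.out.println(cards); // [Ace, Queen, Ten]

            // Interface (Map) + implementation (HashMap): a tiny 'telephone directory'.
            Map<String, String> directory = new HashMap<>();
            directory.put("Alice", "555-1234");
            System.out.println(directory.get("Alice")); // 555-1234
        }
    }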
Java has a number of primitive data types, as given below:
byte: an integer in the range -128 to 127 (inclusive).
short: an integer in the range -32,768 to 32,767 (inclusive).
int: an integer in the range -2^31 to 2^31 - 1.
long: an integer in the range -2^63 to 2^63 - 1.
float: a single-precision 32-bit IEEE 754 floating point. This data type should never be used for precise values, such as currency. For that, you will need to use the java.math.BigDecimal class instead.
double: a double-precision 64-bit IEEE 754 floating point. For decimal values, this data type is generally the default choice. This data type should never be used for precise values, such as currency.
boolean: has only two possible values: true and false.
char: The char data type is a single 16-bit Unicode character. It has a minimum value of '\u0000' (or 0) and a maximum value of '\uffff' (or 65,535 inclusive).
The String type (a peek)
Java has a built-in type called String to represent strings. While String is not a primitive type, strings are used often. String values are demarcated by enclosing them in a pair of double quotes (e.g., "Hello"). You can use the + operator to concatenate strings (e.g., "Hello " + "!").
You’ll learn more about strings in a later section.
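A small sketch using the types above (the values are arbitrary):

    public class PrimitivesDemo {
        public static void main(String[] args) {
            int count = 42;                    // a whole number within the int range
            long bigCount = 9_000_000_000L;    // beyond the int range; note the L suffix
            double price = 19.99;              // default choice for decimal values
            boolean isDone = false;
            char grade = 'A';

            String message = "Hello " + "!";   // String is not primitive, but is used often
            System.out.println(message + " " + count + " " + bigCount + " "
                    + price + " " + isDone + " " + grade);
        }
    }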
What are Exceptions?
Given below is an extract from the Java Tutorial, with some adaptations.
There are three basic categories of exceptions in Java:
Checked exceptions: exceptional conditions that a well-written application should anticipate and recover from. All exceptions are checked exceptions, except for Error, RuntimeException, and their subclasses.
Suppose an application prompts a user for an input file name, then opens the file by passing the name to the constructor for java.io.FileReader. Normally, the user provides the name of an existing, readable file, so the construction of the FileReader object succeeds, and the execution of the application proceeds normally. But sometimes the user supplies the name of a nonexistent file, and the constructor throws java.io.FileNotFoundException. A well-written program will catch this exception and notify the user of the mistake, possibly prompting for a corrected file name.
Errors: exceptional conditions that are external to the application, and that the application usually cannot anticipate or recover from. Errors are those exceptions indicated by Error and its subclasses.
Suppose that an application successfully opens a file for input, but is unable to read the file because of a hardware or system malfunction. The unsuccessful read will throw java.io.IOError. An application might choose to catch this exception, in order to notify the user of the problem — but it also might make sense for the program to print a stack trace and exit.
Runtime exceptions: conditions that are internal to the application, and that the application usually cannot anticipate or recover from. Runtime exceptions are those indicated by RuntimeException and its subclasses. These usually indicate programming bugs, such as logic errors or improper use of an API.
Consider the application described previously that passes a file name to the constructor for FileReader. If a logic error causes a null to be passed to the constructor, the constructor will throw NullPointerException. The application can catch this exception, but it probably makes more sense to eliminate the bug that caused the exception to occur.
Errors and runtime exceptions are collectively known as unchecked exceptions.
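A minimal sketch of the FileReader scenario described above: the compiler forces the checked FileNotFoundException to be handled (or declared), whereas a runtime exception such as NullPointerException is usually better fixed at its source than caught:

    import java.io.FileNotFoundException;
    import java.io.FileReader;

    public class ExceptionDemo {
        public static void main(String[] args) {
            try {
                FileReader reader = new FileReader("input.txt"); // the file may not exist
                // ... read from the file ...
            } catch (FileNotFoundException e) {
                // Checked exception: a well-written program anticipates it and recovers,
                // e.g., by asking the user for a corrected file name.
                System.out.println("File not found; please check the file name.");
            }
        }
    }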
Compiling a program
To compile the HelloWorld program, open a command console, navigate to the folder containing the file, and run the following command.
javac HelloWorld.java
If the compilation is successful, you should see a file HelloWorld.class. That file contains the byte code for your program. If the compilation is unsuccessful, you will be notified of the compile-time errors.
Notes:
javac is the Java compiler that you get when you install the JDK.
For the above command to work, your console program should be able to find the javac executable (e.g., in Windows, the location of javac.exe should be in the PATH system variable). This page shows how to set PATH in different OSes.
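For reference, a HelloWorld.java of the kind compiled above would typically look like this (a minimal sketch; the exact file used in the course may differ):

    public class HelloWorld {
        public static void main(String[] args) {
            System.out.println("Hello World!");
        }
    }

After a successful compilation, the resulting HelloWorld.class can be run with the command java HelloWorld.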
Installation
To run Java programs, you only need to have a recent version of the Java Runtime Environment (JRE) installed on your device.
If you want to develop applications for Java, download and install a recent version of the Java Development Kit (JDK), which includes the JRE as well as additional resources needed to develop Java applications.
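One quick way to verify the installation (assuming the tools are on your PATH) is to check the reported versions from a command console:

    java -version      (available if the JRE/JDK is installed; needed to run programs)
    javac -version     (available only with the JDK; needed to compile programs)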
How Java works
Java is both compiled and interpreted. Instead of translating programs directly into machine language, the Java compiler generates byte code. Byte code is portable, so it is possible to compile a Java program on one machine, transfer the byte code to another machine, and run the byte code on the other machine. That's why Java is considered a platform-independent technology, aka WORA (Write Once Run Anywhere). The interpreter that runs byte code is called a "Java Virtual Machine" (JVM).
Java technology is both a programming language and a platform. The Java programming language is a high-level object-oriented language that has a particular syntax and style. A Java platform is a particular environment in which Java programming language applications run. --Oracle
JUnit: Intermediate
Given below are some noteworthy JUnit concepts, as per the JUnit 5 User Guide.
Annotations: In addition to the @Test annotation you've seen already, there are many other annotations in JUnit. For example, the @Disabled annotation can be used to disable a test temporarily. [more ...]
Pre/post-test tasks: In order to allow individual test methods to be executed in isolation and to avoid unexpected side effects due to mutable test instance state, JUnit creates a new instance of each test class before executing each test method. It is possible to supply code that should be run before/after every test method/class (e.g., for setting up the environment required by the tests, or cleaning up things after a test is completed) by using lifecycle annotations such as @BeforeEach and @AfterAll. [more ...]
Conditional test execution: It is possible to configure tests to run only under certain conditions. For example, the @EnabledOnOs(MAC) annotation can be used to specify tests that should run on Mac OS only. [more ...]
Assumptions: It is possible to specify assumptions that must hold for a test to be executed (i.e., the test will be skipped if the assumption does not hold). [more ...]
Tagging tests: It is possible to tag tests (e.g., @Tag("slow")) so that tests can be selected based on tags. [more ...]
Test execution order: By default, JUnit executes test classes and methods in a deterministic but intentionally nonobvious order. This ensures that subsequent runs of a test suite execute tests in the same order, thereby allowing for repeatable builds. But it is possible to specify a specific testing order. [more ...]
Test hierarchies: Normally, we organize tests into separate test classes. If a more hierarchical structure is needed, the @Nested annotation can be used to express the relationship among groups of tests. [more ...]
Repeated tests: JUnit provides the ability to repeat a test a specified number of times by annotating a method with @RepeatedTest and specifying the total number of repetitions desired. [more ...]
Parameterized tests make it possible to run a test multiple times with different arguments. The parameter values can be supplied in a variety of ways, e.g., an array of values, enums, a CSV file, etc. [more ...]
Dynamic tests: The @TestFactory annotation can be used to specify factory methods that generate tests dynamically. [more ...]
Timeouts: The @Timeout annotation allows one to declare that a test should fail if its execution time exceeds a given duration. [more ...]
Parallel execution: By default, JUnit tests are run sequentially in a single thread. Running tests in parallel — for example, to speed up execution — is available as an opt-in feature. [more ...]
Extensions: JUnit supports third-party extensions. The built-in TempDirectory extension is used to create and clean up a temporary directory for an individual test or all tests in a test class. [more ...]
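A small, hypothetical test class illustrating a few of the annotations mentioned above (JUnit 5 Jupiter API):

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import java.util.ArrayList;
    import java.util.List;

    import org.junit.jupiter.api.BeforeEach;
    import org.junit.jupiter.api.Disabled;
    import org.junit.jupiter.api.Tag;
    import org.junit.jupiter.api.Test;

    class StringListTest {
        private List<String> list;

        @BeforeEach
        void setUp() {
            // JUnit creates a fresh instance of this class for each test,
            // and runs this method before each test method.
            list = new ArrayList<>();
        }

        @Test
        @Tag("fast")
        void add_singleElement_sizeBecomesOne() {
            list.add("x");
            assertEquals(1, list.size());
        }

        @Disabled("temporarily disabled until the related bug is fixed")
        @Test
        void toBeReEnabledLater() {
            // intentionally empty
        }
    }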
Access modifiers
Access level modifiers determine whether other classes can use a particular field or invoke a particular method.
There are two levels of access control:
At the class level:
public: the class is visible to all classes everywhere
no modifier (the default, also known as package-private): it is visible only within its own package
At the member level:
public or no modifier (package-private): same meaning as when used with top-level classes
private: the member can only be accessed in its own class
protected: the member can only be accessed within its own package (as with package-private) and, in addition, by a subclass of its class in another package
The following table shows the access to members permitted by each modifier.
Modifier        Class   Package   Subclass   World
public          yes     yes       yes        yes
protected       yes     yes       yes        no
no modifier     yes     yes       no         no
private         yes     no        no         no
Access levels affect you in two ways:
When you use classes that come from another source, such as the classes in the Java platform, access levels determine which members of those classes your own classes can use.
When you write a class, you need to decide what access level every member variable and every method in your class should have.
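A hypothetical sketch of the member-level modifiers (the school package and Student class are made up for illustration):

    package school;

    public class Student {               // public class: visible to all classes everywhere
        private String name;             // accessible only within the Student class
        int admissionYear;               // package-private: accessible within the 'school' package
        protected double gpa;            // package access, plus subclasses in other packages

        public String getName() {        // accessible from any class
            return name;
        }
    }

    // In a subclass located in another package:
    //     System.out.println(gpa);      // OK: protected member accessed from a subclass
    //     System.out.println(name);     // compile error: name is private to Student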
JavaFX is a technology for building Java-based GUIs. Previously it was a part of Java itself, but it has since become a third-party dependency. It is now being maintained by OpenJDK.
Java comes with a rich collection of classes that you can use. They form what is known as the Java API (Application Programming Interface). Each class in the API comes with documentation in a standard format.
Introduction
What
Can explain what software design is
Design is the creative process of transforming the problem into a solution; the solution is also called design. -- 📖 Software Engineering: Theory and Practice, by Shari Lawrence Pfleeger and Joanne M. Atlee
Software design has two main aspects:
Product/external design: designing the external behavior of the product to meet the users' requirements. This is usually done by product designers with input from business analysts, user experience experts, user representatives, etc.
Implementation/internal design: designing how the product will be implemented to meet the required external behavior. This is usually done by software architects and software engineers.
Agile design
Can explain agile design
Agile design can be contrasted with full upfront design in the following way:
Agile designs are emergent, they’re not defined up front. Your overall system design will emerge over time, evolving to fulfill new requirements and take advantage of new technologies as appropriate. Although you will often do some initial architectural modeling at the very beginning of a project, this will be just enough to get your team going. This approach does not produce a fully documented set of models in place before you may begin coding. -- adapted from agilemodeling.com
Exercises:
Statement about agile design
The design of bigger systems needs to be done/shown at multiple levels.
This architecture diagram of se-edu/addressbook-level3 depicts the high-level design of the software.
Here are examples of lower level designs of some components of the same software:
UI
Logic
Storage
Top-down and bottom-up design
Can explain top-down and bottom-up design
Multi-level design can be done in a top-down manner, bottom-up manner, or as a mix.
Top-down: Design the high-level design first and flesh out the lower levels later. This is especially useful when designing big and novel systems where the high-level design needs to be stable before lower levels can be designed.
Bottom-up: Design lower level components first and put them together to create the higher-level systems later. This is not usually scalable for bigger systems. One instance where this approach might work is when designing a variation of an existing system or re-purposing existing components to build a new system.
Mix: Design the top levels using the top-down approach but switch to a bottom-up approach when designing the bottom levels.
Exercises:
Which is better?
Abstraction
What
Can explain abstraction
Abstraction is a technique for dealing with complexity. It works by establishing a level of complexity we are interested in, and suppressing the more complex details below that level.
The guiding principle of abstraction is that only details that are relevant to the current perspective or the task at hand need to be considered. As most programs are written to solve complex problems involving large amounts of intricate details, it is impossible to deal with all these details at the same time. That is where abstraction can help.
Data abstraction: abstracting away the lower level data items and thinking in terms of bigger entities
Within a certain software component, you might deal with a user data type, while ignoring the details contained in the user data item, such as name and date of birth. These details have been 'abstracted away' as they do not affect the task of that software component.
Control abstraction: abstracting away details of the actual control flow to focus on tasks at a higher level
print("Hello") is an abstraction of the actual output mechanism within the computer.
Abstraction can be applied repeatedly to obtain progressively higher levels of abstraction.
An example of different levels of data abstraction: a File is a data item that is at a higher level than an array and an array is at a higher level than a bit.
An example of different levels of control abstraction: execute(Game) is at a higher level than print(Char) which is at a higher level than an Assembly language instruction MOV.
Abstraction is a general concept that is not limited to just data or control abstractions.
Some more general examples of abstraction:
An OOP class is an abstraction over related data and behaviors.
An architecture is a higher-level abstraction of the design of a software system.
Models (e.g., UML models) are abstractions of some aspect of reality.
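A small Java sketch of the two kinds of abstraction (the class and method names are illustrative):

    // Data abstraction: callers think in terms of a 'User', not its individual fields.
    class User {
        private String name;
        private String dateOfBirth;

        User(String name, String dateOfBirth) {
            this.name = name;
            this.dateOfBirth = dateOfBirth;
        }
    }

    class Mailer {
        // Control abstraction: the caller asks for a welcome email to be sent, and is not
        // exposed to the lower-level steps (connecting, formatting, retrying, ...).
        void sendWelcomeEmail(User user) {
            // ... lower-level details hidden here ...
        }
    }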
Cohesion is a measure of how strongly-related and focused the various responsibilities of a component are. A highly-cohesive component keeps related functionalities together while keeping out all other unrelated things.
Higher cohesion is better. Disadvantages of low cohesion (aka weak cohesion):
Lowers the understandability of modules as it is difficult to express module functionalities at a higher level.
Lowers maintainability because a module can be modified due to unrelated causes (reason: the module contains code unrelated to each other) or many modules may need to be modified to achieve a small change in behavior (reason: because the code related to that change is not localized to a single module).
Lowers reusability of modules because they do not represent logical units of functionality.
How
Can increase cohesion
Cohesion can be present in many forms. Some examples:
Code related to a single concept is kept together, e.g. the Student component handles everything related to students.
Code that is invoked close together in time is kept together, e.g. all code related to initializing the system is kept together.
Code that manipulates the same data structure is kept together, e.g. the GameArchive component handles everything related to the storage and retrieval of game sessions.
Suppose a Payroll application contains a class that deals with writing data to the database. If the class includes some code to show an error dialog to the user if the database is unreachable, that class is not cohesive because it seems to be interacting with the user as well as the database.
Exercises:
Which class is more cohesive?
+
Cohesion
What
Can explain cohesion
Cohesion is a measure of how strongly-related and focused the various responsibilities of a component are. A highly-cohesive component keeps related functionalities together while keeping out all other unrelated things.
Higher cohesion is better. Disadvantages of low cohesion (aka weak cohesion):
Lowers the understandability of modules as it is difficult to express module functionalities at a higher level.
Lowers maintainability because a module can be modified due to unrelated causes (reason: the module contains code unrelated to each other) or many modules may need to be modified to achieve a small change in behavior (reason: because the code related to that change is not localized to a single module).
Lowers reusability of modules because they do not represent logical units of functionality.
How
Can increase cohesion
Cohesion can be present in many forms. Some examples:
Code related to a single concept is kept together, e.g. the Student component handles everything related to students.
Code that is invoked close together in time is kept together, e.g. all code related to initializing the system is kept together.
Code that manipulates the same data structure is kept together, e.g. the GameArchive component handles everything related to the storage and retrieval of game sessions.
Suppose a Payroll application contains a class that deals with writing data to the database. If the class includes some code to show an error dialog to the user if the database is unreachable, that class is not cohesive because it seems to be interacting with the user as well as the database.
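A minimal Java sketch of that Payroll example (the class names, the exception, and the 'error dialog' stand-in are illustrative assumptions) showing how the responsibilities could be separated to increase cohesion:

// Low cohesion: a storage class that also interacts with the user.
class SalaryStorageBefore {
    void save(String salaryRecord) {
        boolean databaseReachable = false;   // stand-in for a real connectivity check
        if (!databaseReachable) {
            System.out.println("ERROR DIALOG: Database is unreachable");  // UI concern inside a storage class
            return;
        }
        // ... write salaryRecord to the database ...
    }
}

// Higher cohesion: the storage class only stores; the UI class decides how to inform the user.
class SalaryStorage {
    void save(String salaryRecord) {
        boolean databaseReachable = false;   // stand-in for a real connectivity check
        if (!databaseReachable) {
            throw new IllegalStateException("Database is unreachable");
        }
        // ... write salaryRecord to the database ...
    }
}

class PayrollUi {
    private final SalaryStorage storage = new SalaryStorage();

    void onSaveClicked(String salaryRecord) {
        try {
            storage.save(salaryRecord);
        } catch (IllegalStateException e) {
            System.out.println("ERROR DIALOG: " + e.getMessage());  // user interaction stays in the UI layer
        }
    }
}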
Exercises:
Which class is more cohesive?
Coupling
What
Can explain coupling
Coupling is a measure of the degree of dependence between components, classes, methods, etc. Low coupling indicates that a component is less dependent on other components. High coupling (aka tight coupling or strong coupling) is discouraged due to the following disadvantages:
Maintenance is harder because a change in one module could cause changes in other modules coupled to it (i.e. a ripple effect).
Integration is harder because multiple components coupled with each other have to be integrated at the same time.
Testing and reuse of the module is harder due to its dependence on other modules.
In the example below, design A appears to have more coupling between the components than design B.
Exercises:
Coupling levels of alternative designs
Regressions and coupling
Coupling and testability
Statements about coupling
How
Can reduce coupling
X is coupled to Y if a change to Y can potentially require a change in X.
If the Foo class calls the method Bar#read(), Foo is coupled to Bar because a change to Bar can potentially (but not always) require a change in Foo. For example, if the signature of Bar#read() is changed, Foo needs to change as well; however, a change to the Bar#write() method may not require a change in Foo because Foo does not call Bar#write().
Code for the above example:
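A minimal sketch in Java; Foo, Bar, Bar#read() and Bar#write() come from the text above, while the field and return types are illustrative assumptions.

class Bar {
    String read() {            // if this signature changes, Foo has to change too
        return "data";
    }

    void write(String data) {  // Foo never calls this, so changes here need not affect Foo
    }
}

class Foo {
    private final Bar bar = new Bar();

    String load() {
        return bar.read();     // this call is what couples Foo to Bar
    }
}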
Some examples of coupling: A is coupled to B if,
A has access to the internal structure of B (this results in a very high level of coupling)
A and B depend on the same global variable
A calls B
A receives an object of B as a parameter or a return value
A inherits from B
A and B are required to follow the same data format or communication protocol
Exercises:
Which indicate coupling?
Types of coupling
Can identify types of coupling
Some examples of different coupling types:
Content coupling: one module modifies or relies on the internal workings of another module e.g., accessing local data of another module
Common/Global coupling: two modules share the same global data
Control coupling: one module controlling the flow of another, by passing it information on what to do e.g., passing a flag
Data coupling: one module sharing data with another module e.g. via passing parameters
External coupling: two modules share an externally imposed convention e.g., data formats, communication protocols, device interfaces.
Subclass coupling: a class inherits from another class. Note that a child class is coupled to the parent class but not the other way around.
Temporal coupling: two actions are bundled together just because they happen to occur at the same time e.g. extracting a contiguous block of code as a method although the code block contains statements unrelated to each other
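To make two of the types above more concrete, here is a small Java sketch (class and method names are illustrative) contrasting control coupling with plain data coupling:

class ReportPrinter {
    // Control coupling: the caller passes a flag that dictates this method's control flow.
    void print(String report, boolean asSummary) {
        if (asSummary) {
            System.out.println(report.substring(0, Math.min(20, report.length())));
        } else {
            System.out.println(report);
        }
    }

    // Data coupling only: callers pass just the data needed, with no control information.
    void printFull(String report) {
        System.out.println(report);
    }

    void printSummary(String report) {
        System.out.println(report.substring(0, Math.min(20, report.length())));
    }
}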
Abstraction occurrence pattern
What
Can explain the Abstraction Occurrence design pattern
Context
There is a group of similar entities that appear to be ‘occurrences’ (or ‘copies’) of the same thing, sharing lots of common information, but also differing in significant ways.
In a library, there can be multiple copies of the same book title. Each copy shares common information such as book title, author, ISBN, etc. However, there are also significant differences like purchase date and barcode number (assumed to be unique for each copy of the book).
Other examples:
Episodes of the same TV series
Stock items of the same product model (e.g. TV sets of the same model)
Problem
Representing the objects mentioned previously as a single class would be problematic because it results in duplication of data which can lead to inconsistencies in data (if some of the duplicates are not updated consistently).
Take for example the problem of representing books in a library. Assume that there could be multiple copies of the same title, bearing the same ISBN number, but different serial numbers.
The above solution requires common information to be duplicated by all instances. This will not only waste storage space, but also creates a consistency problem. Suppose that after creating several copies of the same title, the librarian realized that the author name was wrongly spelt. To correct this mistake, the system needs to go through every copy of the same title to make the correction. Also, if a new copy of the title is added later on, the librarian (or the system) has to make sure that all information entered is the same as the existing copies to avoid inconsistency.
Anti-pattern
Refer to the same Library example given above.
The design above segregates the common and unique information into a class hierarchy. Each book title is represented by a separate class with common data (i.e. Name, Author, ISBN) hard-coded in the class itself. This solution is problematic because each book title is represented as a class, resulting in thousands of classes (one for each title). Every time the library buys new books, the source code of the system will have to be updated with new classes.
Solution
Let a copy of an entity (e.g. a copy of a book) be represented by two objects instead of one, separating the common and unique information into two classes to avoid duplication.
Given below is how the pattern is applied to the Library example:
Here's a more generic example:
The general solution:
The <<Abstraction>> class should hold all common information, and the unique information should be kept by the <<Occurrence>> class. Note that ‘Abstraction’ and ‘Occurrence’ are not class names, but roles played by each class. Think of this diagram as a meta-model (i.e. a ‘model of a model’) of the BookTitle-BookCopy class diagram given above.
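A minimal Java sketch of the BookTitle–BookCopy design described above (the field names are illustrative assumptions):

// Holds the information common to all copies of a title (the <<Abstraction>> role).
class BookTitle {
    private final String title;
    private final String author;
    private final String isbn;

    BookTitle(String title, String author, String isbn) {
        this.title = title;
        this.author = author;
        this.isbn = isbn;
    }
}

// Holds the information unique to each physical copy (the <<Occurrence>> role).
class BookCopy {
    private final BookTitle title;      // many copies can point to the same BookTitle object
    private final String barcode;
    private final String purchaseDate;

    BookCopy(BookTitle title, String barcode, String purchaseDate) {
        this.title = title;
        this.barcode = barcode;
        this.purchaseDate = purchaseDate;
    }
}

Correcting a wrongly spelt author name then means changing a single BookTitle object, no matter how many BookCopy objects refer to it.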
Exercises:
Which situations match the pattern?
Apply pattern?
Command pattern
What
Can explain the Command design pattern
Context
A system is required to execute a number of commands, each doing a different task. For example, a system might have to support Sort, List, Reset commands.
Problem
It is preferable that some part of the code executes these commands without having to know each command type. e.g., there can be a CommandQueue object that is responsible for queuing commands and executing them without knowledge of what each command does.
Solution
The essential element of this pattern is to have a general <<Command>> object that can be passed around, stored, executed, etc., without knowing the type of command (i.e. via polymorphism).
Let us examine an example application of the pattern first:
In the example solution below, the CommandCreator creates List, Sort, and Reset Command objects and adds them to the CommandQueue object. The CommandQueue object treats them all as Command objects and performs the execute/undo operation on each of them without knowledge of the specific Command type. When executed, each Command object will access the DataStore object to carry out its task. The Command class can also be an abstract class or an interface.
The general form of the solution is as follows.
The <<Client>> creates a <<ConcreteCommand>> object, and passes it to the <<Invoker>>. The <<Invoker>> object treats all commands as a general <<Command>> type. The <<Invoker>> issues a request by calling execute() on the command. If a command is undoable, the <<ConcreteCommand>> will store the state needed for undoing the command prior to invoking execute(). In addition, the <<ConcreteCommand>> object may have to be linked to any <<Receiver>> of the command before it is passed to the <<Invoker>>. Note that an application of the command pattern does not have to follow the structure given above.
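A minimal Java sketch of the structure described above (SortCommand, ListCommand, and the queue contents are illustrative assumptions):

import java.util.ArrayDeque;
import java.util.Queue;

interface Command {
    void execute();
}

class SortCommand implements Command {     // a <<ConcreteCommand>>
    @Override
    public void execute() {
        System.out.println("sorting the data store");
    }
}

class ListCommand implements Command {     // another <<ConcreteCommand>>
    @Override
    public void execute() {
        System.out.println("listing the data store");
    }
}

class CommandQueue {                       // the <<Invoker>>: knows only the general Command type
    private final Queue<Command> commands = new ArrayDeque<>();

    void add(Command command) {
        commands.add(command);
    }

    void runAll() {
        while (!commands.isEmpty()) {
            commands.poll().execute();     // polymorphism hides the concrete command type
        }
    }
}

A client could then do queue.add(new SortCommand()); queue.runAll(); without the queue ever referring to SortCommand directly.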
Facade pattern
What
Can explain the Facade design pattern
Context
Components need to access functionality deep inside other components.
The UI component of a Library system might want to access functionality of the Book class contained inside the Logic component.
Problem
Access to the component should be allowed without exposing its internal details. e.g. the UI component should access the functionality of the Logic component without knowing that it contains a Book class within it.
Solution
Include a class that sits between the component internals and users of the component such that all access to the component happens through the Facade class.
The following class diagram applies the Facade pattern to the Library System example. The LibraryLogic class is the Facade class.
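A minimal Java sketch of the idea (Book is taken from the example above; the method names and the sample data are illustrative assumptions, not the actual Library system's API):

// Inside the Logic component (not visible to the UI).
class Book {
    private final String title;

    Book(String title) {
        this.title = title;
    }

    String getTitle() {
        return title;
    }
}

// The facade: the only class in the Logic component that the UI talks to.
class LibraryLogic {
    private final Book sampleBook = new Book("The Mythical Man-Month");

    String getBookTitle() {
        return sampleBook.getTitle();   // use of Book stays hidden behind the facade
    }
}

class Ui {
    private final LibraryLogic logic = new LibraryLogic();

    void showBook() {
        System.out.println(logic.getBookTitle());  // the UI never mentions Book
    }
}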
Exercises:
Is this Facade?
Format
The common format to describe a pattern consists of the following components:
Context: The situation or scenario where the design problem is encountered.
Problem: The main difficulty to be resolved.
Solution: The core of the solution. It is important to note that the solution presented only includes the most general details, which may need further refinement for a specific context.
Anti-patterns (optional): Commonly used solutions, which are usually incorrect and/or inferior to the Design Pattern.
Consequences (optional): Identifying the pros and cons of applying the pattern.
Other useful information (optional): Code examples, known uses, other related patterns, etc.
Exercises:
Anti-patterns required?
Introduction
What
Can explain design patterns
Design pattern: An elegant reusable solution to a commonly recurring problem within a given context in software design.
In software development, there are certain problems that recur in a certain context.
Some examples of recurring design problems:
Design Context: Assembling a system that makes use of other existing systems implemented using different technologies
Recurring Problem: What is the best architecture?
Design Context: UI needs to be updated when the data in the application backend changes
Recurring Problem: How to initiate an update to the UI when data changes without coupling the backend to the UI?
Model view controller (MVC) pattern
What
Can explain the Model View Controller (MVC) design pattern
Context
Most applications support storage/retrieval of information, displaying of information to the user (often via multiple UIs having different formats), and changing stored information based on external inputs.
Problem
The high coupling that can result from the interlinked nature of the features described above.
Solution
Decouple data, presentation, and control logic of an application by separating them into three different components: Model, View and Controller.
View: Displays data, interacts with the user, and pulls data from the model if necessary.
Controller: Detects UI events such as mouse clicks and button pushes, and takes follow up action. Updates/changes the model/view when necessary.
Model: Stores and maintains data. Updates the view if necessary.
The relationship between the components can be observed in the diagram below. Typically, the UI is the combination of View and Controller.
Given below is a concrete example of MVC applied to a student management system. In this scenario, the user is retrieving the data of a student.
In the diagram above, when the user clicks on a button using the UI, the ‘click’ event is caught and handled by the UiController. The ref frame indicates that the interactions within that frame have been extracted out to another separate sequence diagram.
Note that in a simple UI where there’s only one view, Controller and View can be combined as one class.
There are many variations of the MVC model used in different domains. For example, the one used in a desktop GUI could be different from the one used in a web application.
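To make the responsibilities above more concrete, here is a minimal Java sketch of the student management scenario. The class and method names (StudentModel, StudentView, UiController, onRetrieveButtonClicked) are illustrative assumptions, not names taken from the diagrams.

import java.util.HashMap;
import java.util.Map;

class StudentModel {  // Model: stores and maintains the data
    private final Map<String, String> records = new HashMap<>();

    void addStudent(String studentId, String details) {
        records.put(studentId, details);
    }

    String getStudentDetails(String studentId) {
        return records.getOrDefault(studentId, "no such student");
    }
}

class StudentView {  // View: displays data to the user
    void displayStudentDetails(String details) {
        System.out.println("Student details: " + details);
    }
}

class UiController {  // Controller: reacts to UI events and coordinates Model and View
    private final StudentModel model;
    private final StudentView view;

    UiController(StudentModel model, StudentView view) {
        this.model = model;
        this.view = view;
    }

    // invoked when the user clicks the 'retrieve student' button
    void onRetrieveButtonClicked(String studentId) {
        String details = model.getStudentDetails(studentId);  // get the data from the Model
        view.displayStudentDetails(details);                  // ask the View to display it
    }
}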
Combining design patterns
Design patterns are usually embedded in a larger design and sometimes applied in combination with other design patterns.
Let us look at a case study that shows how design patterns are used in the design of a class structure for a Stock Inventory System (SIS) for a shop. The shop sells appliances and accessories for the appliances. SIS simply stores information about each item in the store.
Use Cases:
Create a new item
View information about an item
Modify information about an item
View all available accessories for a given appliance
List all items in the store
SIS can be accessed using multiple terminals. Shop assistants use their own terminals to access SIS, while the shop manager’s terminal continuously displays a list of all items in the store. In the future, it is expected that suppliers of items use their own applications to connect to SIS to get real-time information about the current stock status. User authentication is not required for the current version, but may be required in the future.
A step by step explanation of the design is given below. Note that this is one out of many possible designs. Design patterns are also applied where appropriate.
A StockItem can be an Appliance or an Accessory.
To track that each Accessory is associated with the correct Appliance, consider the following alternative class structures.
The third one seems more appropriate (the second one is suitable if accessories can have accessories). Next, decide between keeping a list of Appliances and keeping a list of StockItems. Which is more appropriate?
The latter seems more suitable because it can handle both appliances and accessories the same way. Next, an abstraction occurrence pattern is applied to keep track of StockItems.
Note the inclusion of navigabilities. Here’s a sample object diagram based on the class model created thus far.
Next, apply the façade pattern to shield the SIS internals from the UI.
As UI consists of multiple views, the MVC pattern is applied here.
Some views need to be updated when the data changes; apply the Observer pattern here.
In addition, the Singleton pattern can be applied to the façade class.
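As a rough code-level illustration of the Observer step mentioned above, here is a minimal Java sketch: the model notifies registered views whenever its data changes, which is how the shop manager's continuously-updated list could be kept fresh. The names used (StockList, StockObserver, ItemListView) are assumptions for illustration, not the exact classes of the case study.

import java.util.ArrayList;
import java.util.List;

interface StockObserver {
    void stockChanged();  // called by the model whenever its data changes
}

class StockList {  // part of the Model: maintains the data
    private final List<String> items = new ArrayList<>();
    private final List<StockObserver> observers = new ArrayList<>();

    void addObserver(StockObserver observer) {
        observers.add(observer);
    }

    void addItem(String itemName) {
        items.add(itemName);
        for (StockObserver observer : observers) {
            observer.stockChanged();  // notify all registered views
        }
    }

    List<String> getItems() {
        return new ArrayList<>(items);
    }
}

class ItemListView implements StockObserver {  // e.g. the shop manager's list-of-all-items view
    private final StockList stockList;

    ItemListView(StockList stockList) {
        this.stockList = stockList;
        stockList.addObserver(this);  // register to be notified of changes
    }

    @Override
    public void stockChanged() {
        System.out.println("All items: " + stockList.getItems());  // refresh the displayed list
    }
}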
Using design patterns
Design patterns provide a high-level vocabulary to talk about design.
Someone can say 'apply Observer pattern here' instead of having to describe the mechanics of the solution in detail.
Knowing more patterns is a way to become more ‘experienced’. Aim to learn at least the context and the problem of patterns so that when you encounter those problems you know where to look for a solution.
Some patterns are domain-specific (e.g. patterns for distributed applications), some are created in-house (e.g. patterns used in the company/project), and some can be self-created (e.g. from past experience).
Be careful not to overuse patterns. Do not throw patterns at a problem at every opportunity. Patterns come with overhead such as adding more classes or increasing the levels of abstraction. Use them only when they are needed. Before applying a pattern, make sure that:
there is a substantial improvement in the design, not just a superficial one.
the associated tradeoffs are carefully considered. There are times when a design pattern is not appropriate (or is overkill).
Design patterns versus design principles
Can differentiate between design patterns and principles
Design principles have varying degrees of formality – rules, opinions, rules of thumb, observations, and axioms. Compared to design patterns, principles are more general, have wider applicability, with correspondingly greater overlap among them.
Evaluation
Pros:
easy to apply
effective in achieving its goal with minimal extra work
provides an easy way to access the singleton object from anywhere in the code base
Cons:
The singleton object acts like a global variable that increases coupling across the code base.
In testing, it is difficult to replace Singleton objects with stubs (static methods cannot be overridden).
In testing, singleton objects carry data from one test to another even when you want each test to be independent of the others.
Given that there are some significant cons, it is recommended that you apply the Singleton pattern when, in addition to requiring only one instance of a class, there is a risk of creating multiple objects by mistake, and creating such multiple objects has real negative consequences.
What
Context
Certain classes should have no more than just one instance (e.g. the main controller class of the system). These single instances are commonly known as singletons.
Problem
A normal class can be instantiated multiple times by invoking the constructor.
Solution
Make the constructor of the singleton class private, because a public constructor will allow others to instantiate the class at will. Provide a public class-level method to access the single instance.
Example:
The <<Singleton>> in the class above uses the UML stereotype notation, which is used to (optionally) indicate the purpose or the role played by a UML element. In this example, the class Logic is playing the role of a Singleton class. The general format is <<role/purpose>>.
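Here is a minimal Java sketch of the solution described above, using the Logic class from the example; the field and method names below are common conventions assumed for illustration.

class Logic {
    private static Logic theOne = null;  // the sole instance

    private Logic() {
        // a private constructor prevents other classes from instantiating Logic directly
    }

    public static Logic getInstance() {  // the public class-level access method
        if (theOne == null) {
            theOne = new Logic();
        }
        return theOne;
    }
}

Any code that needs the instance obtains it via Logic.getInstance() rather than new Logic(), so every caller ends up sharing the same object.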
Guideline: Aim for comprehensibility
What
Can explain the need for comprehensibility in documents
Technical documents exist to help others understand technical details. Therefore, it is not enough for the documentation to be accurate and comprehensive; it should also be comprehensible.
How
Can write reasonably comprehensible developer documents
Here are some tips on writing effective documentation.
Use plenty of diagrams: It is not enough to explain something in words; complement it with visual illustrations (e.g. a UML diagram).
Use plenty of examples: When explaining algorithms, show a running example to illustrate each step of the algorithm, in parallel to worded explanations.
Use simple and direct explanations: Convoluted explanations and fancy words will annoy readers. Avoid long sentences.
Get rid of statements that do not add value: For example, 'We made sure our system works perfectly' (who didn't?), 'Component X has its own responsibilities' (of course it has!).
It is not a good idea to have separate sections for each type of artifact, such as 'use cases', 'sequence diagrams', 'activity diagrams', etc. Such a structure, coupled with the indiscriminate inclusion of diagrams without justifying their need, indicates a failure to understand the purpose of documentation. Include diagrams when they are needed to explain something. If you want to provide additional diagrams for completeness' sake, include them in the appendix as a reference.
Guideline: Document minimally, but sufficiently
What
Can explain that documentation should be minimal yet sufficient
Aim for 'just enough' developer documentation.
Writing and maintaining developer documents is an overhead. You should try to minimize that overhead.
If the readers are developers who will eventually read the code, the documentation should complement the code and should provide only just enough guidance to get started.
How
Can write minimal yet sufficient documentation
Anything that is already clear in the code need not be described in words. Instead, focus on providing higher level information that is not readily visible in the code or comments.
Refrain from duplicating chunks of text. When describing several similar algorithms/designs/APIs, etc., do not simply duplicate large chunks of text. Instead, describe the similarities in one place and emphasize only the differences in other places. It is very annoying to see pages and pages of similar text without any indication as to how they differ from each other.
Why
The main advantage of the top-down approach is that the document is structured like an upside down tree (root at the top): the reader can travel down a path she is interested in until she reaches the component she wants to learn about in depth, without having to read the entire document or understand the whole system.
When
It is not necessary to be 100% defensive all the time. While defensive code may be less prone to being misused or abused, such code can also be more complicated and slower to run (a small illustration follows the list below).
The suitable degree of defensiveness depends on many factors such as:
How critical is the system?
Will the code be used by programmers other than the author?
The level of programming language support for defensive programming
The overhead of being defensive
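As a small, hypothetical illustration of the trade-off, here is a non-defensive method next to a defensive version of the same method; the defensive version is harder to misuse, but it carries extra code and extra runtime checks.

class TicketCounter {
    private int ticketsRemaining;

    TicketCounter(int ticketsRemaining) {
        this.ticketsRemaining = ticketsRemaining;
    }

    // non-defensive: assumes the caller never asks for more tickets than remain
    void issueTickets(int count) {
        ticketsRemaining -= count;
    }

    // defensive: refuses to proceed if the assumptions do not hold
    void issueTicketsDefensively(int count) {
        if (count <= 0) {
            throw new IllegalArgumentException("count must be positive: " + count);
        }
        if (count > ticketsRemaining) {
            throw new IllegalStateException("not enough tickets remaining");
        }
        ticketsRemaining -= count;
    }
}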
Exercises:
Defensive programming
Design-by-contract approach
Can explain the Design-by-Contract approach
Design by contract is an approach for designing software that requires defining formal, precise and verifiable interface specifications for software components.
Suppose an operation is implemented with the behavior specified precisely in the API (preconditions, post conditions, exceptions etc.). When following the defensive approach, the code should first check if the preconditions have been met. Typically, exceptions are thrown if preconditions are violated. In contrast, the Design-by-Contract (DbC) approach to coding assumes that it is the responsibility of the caller to ensure all preconditions are met. The operation will honor the contract only if the preconditions have been met. If any of them have not been met, the behavior of the operation is "unspecified".
Languages such as Eiffel have native support for DbC. For example, preconditions of an operation can be specified in Eiffel and the language runtime will check precondition violations without the need to do it explicitly in the code. To follow the DbC approach in languages such as Java and C++ where there is no built-in DbC support, assertions can be used to confirm pre-conditions.
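Here is a minimal Java sketch of that idea, using assertions to state the contract of a hypothetical withdraw operation (Java assertions are checked only when the JVM is run with the -ea flag). Under DbC, the checks document the contract; it remains the caller's responsibility to satisfy the preconditions.

class Account {
    private double balance;

    Account(double initialBalance) {
        this.balance = initialBalance;
    }

    // contract: amount must be positive and must not exceed the current balance
    void withdraw(double amount) {
        assert amount > 0 : "precondition violated: amount must be positive";
        assert amount <= balance : "precondition violated: amount exceeds balance";
        balance -= amount;
        assert balance >= 0 : "postcondition violated: balance is negative";
    }
}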
Exercises:
Statements about the Design-by-contract approach
Exceptions
What
Can explain exceptions
Exceptions are used to deal with 'unusual' but not entirely unexpected situations that the program might encounter at runtime.
Exception:
The term exception is shorthand for the phrase "exceptional event." An exception is an event, which occurs during the execution of a program, that disrupts the normal flow of the program's instructions. -- Java Tutorial (Oracle Inc.)
Examples:
A network connection encounters a timeout due to a slow server.
The code tries to read a file from the hard disk but the file is corrupted and cannot be read.
How
Can explain how exception handling is done typically
Most languages allow code that encountered an "exceptional" situation to encapsulate details of the situation in an Exception object and throw/raise that object so that another piece of code can catch it and deal with it. This is especially useful when the code that encountered the unusual situation does not know how to deal with it.
The extract below from the Java Tutorial (with slight adaptations) explains how exceptions are typically handled.
When an error occurs at some point in the execution, the code being executed creates an exception object and hands it off to the runtime system. The exception object contains information about the error, including its type and the state of the program when the error occurred. Creating an exception object and handing it to the runtime system is called throwing an exception.
After a method throws an exception, the runtime system attempts to find something to handle it in the call stack. The runtime system searches the call stack for a method that contains a block of code that can handle the exception. This block of code is called an exception handler. The search begins with the method in which the error occurred and proceeds through the call stack in the reverse order in which the methods were called. When an appropriate handler is found, the runtime system passes the exception to the handler. An exception handler is considered appropriate if the type of the exception object thrown matches the type that can be handled by the handler.
The exception handler chosen is said to catch the exception. If the runtime system exhaustively searches all the methods on the call stack without finding an appropriate exception handler, the program terminates.
Advantages of exception handling in this way:
The ability to propagate error information through the call stack.
The separation of code that deals with 'unusual' situations from the code that does the 'usual' work.
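To tie the steps above together, here is a minimal Java sketch of an exception thrown in one method and caught further up the call stack; the file-loading scenario and the names used are illustrative assumptions.

import java.io.FileNotFoundException;

public class ConfigLoader {
    public static void main(String[] args) {
        try {
            loadConfig("settings.cfg");        // normal flow
        } catch (FileNotFoundException e) {    // exception handler: catches what was thrown below
            System.out.println("Using default settings because: " + e.getMessage());
        }
    }

    static void loadConfig(String fileName) throws FileNotFoundException {
        boolean fileExists = false;  // stand-in for an actual check
        if (!fileExists) {
            // throw the exception; the runtime system searches the call stack (here, main) for a handler
            throw new FileNotFoundException(fileName + " is missing");
        }
        // ... read the file ...
    }
}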
Exercises:
Benefits of exceptions
When
Can avoid using exceptions to control normal workflow
In general, use exceptions only for 'unusual' conditions. Use normal return statements to pass control to the caller for conditions that are 'normal'.
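As a small, hypothetical illustration of this guideline: a 'student not found' result below is a normal, expected outcome, so it is reported via the return value rather than by throwing an exception.

import java.util.Optional;

class StudentDirectory {
    // preferred: a normal return value for a normal condition
    Optional<String> findEmail(String studentId) {
        // ... look up the student; return Optional.of(email) if a record exists ...
        return Optional.empty();  // 'not found' is an expected outcome, not an exceptional one
    }

    // avoid: throwing something like a StudentNotFoundException merely to signal the same outcome
}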
Introduction
What
Can explain error handling
Well-written applications include error-handling code that allows them to recover gracefully from unexpected errors. When an error occurs, the application may need to request user intervention, or it may be able to recover on its own. In extreme cases, the application may log the user off or shut down the system. -- Microsoft
Logging is the deliberate recording of certain information during a program execution for future reference. Logs are typically written to a log file but it is also possible to log information in other ways e.g. into a database or a remote server.
Logging can be useful for troubleshooting problems. A good logging system records some system information regularly. When bad things happen to a system e.g. an unanticipated failure, their associated log files may provide indications of what went wrong and actions can then be taken to prevent it from happening again.
A log file is like the flight data recorder of an airplane; it doesn't prevent problems, but it can be helpful in understanding what went wrong after the fact.
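For example, using Java's built-in java.util.logging package (the PaymentService class and the messages below are illustrative assumptions; by default the output goes to the console, but handlers can be configured to write to a file instead):

import java.util.logging.Level;
import java.util.logging.Logger;

class PaymentService {
    private static final Logger logger = Logger.getLogger(PaymentService.class.getName());

    void pay(double amount) {
        logger.log(Level.INFO, "Processing payment of {0}", amount);
        if (amount <= 0) {
            logger.warning("Rejected payment with non-positive amount: " + amount);
            return;
        }
        // ... actual payment logic ...
        logger.fine("Payment processed successfully");
    }
}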
Brainstorming
Brainstorming: A group activity designed to generate a large number of diverse and creative ideas for the solution of a problem.
In a brainstorming session there are no "bad" ideas. The aim is to generate ideas; not to validate them. Brainstorming encourages you to "think outside the box" and put "crazy" ideas on the table without fear of rejection.
Exercises:
Characteristic of brainstorming
User surveys
Can explain user surveys
Surveys can be used to solicit responses and opinions from a large number of stakeholders regarding a current product or a new product.
Observation
Can explain observation
Observing users in their natural work environment can uncover product requirements. Usage data of an existing system can also be used to gather information about how an existing system is being used, which can help in building a better replacement e.g. to find the situations where the user makes mistakes when using the current system.
Interviews
Can explain interviews
Interviewing stakeholders and domain experts can produce useful information about project requirements.
Focus groups are a kind of informal interview within an interactive group setting. A group of people (e.g. potential users, beta testers) are asked about their understanding of a specific issue, process, product, advertisement, etc.
Extra resource: How do focus groups work? -- Hector Lanz
Prototyping
Can explain prototyping
Prototype: A prototype is a mock up, a scaled down version, or a partial system constructed
to get users’ feedback.
to validate a technical concept (a "proof-of-concept" prototype).
to give a preview of what is to come, or to compare multiple alternatives on a small scale before committing fully to one alternative.
for early field-testing under controlled conditions.
Prototyping can uncover requirements, in particular, those related to how users interact with the system. UI prototypes or mock ups are often used in brainstorming sessions, or in meetings with the users to get quick feedback from them.
A mock up (also called a wireframe diagram) of a dialog box:
[source: plantuml.com]
Prototyping can be used for discovering as well as specifying requirements e.g. a UI prototype can serve as a specification of what to build.
Product surveys
Can explain product surveys
Studying existing products can unearth shortcomings of existing solutions that can be addressed by a new product. Product manuals and other forms of documentation of an existing system can tell us how the existing solutions work.
When developing a game for a mobile device, a look at a similar PC game can give insight into the kind of features and interactions the mobile game can offer.
checkout: Retrieving a specific revision
Git can load a specific version of the history to the working directory. Note that if you have uncommitted changes in the working directory, you need to stash them first to prevent them from being overwritten.
Sourcetree
Double-click the commit you want to load to the working directory, or right-click on that commit and choose Checkout....
Click OK to the warning about ‘detached HEAD’ (similar to below).
The specified version is now loaded to the working folder, as indicated by the HEAD label. HEAD is a reference to the currently checked out commit.
If you check out a commit that comes before the commit in which you added the .gitignore file, Git will now show ignored files as ‘unstaged modifications’ because at that stage Git hasn’t been told to ignore those files.
To go back to the latest commit, double-click it.
CLI
Use the checkout <commit-identifier> command to change the working directory to the state it was in at a specific past commit.
git checkout v1.0: loads the state as at commit tagged v1.0
git checkout 0023cdd: loads the state as at commit with the hash 0023cdd
git checkout HEAD~2: loads the state that is 2 commits behind the most recent commit
For now, you can ignore the warning about ‘detached HEAD’.
You can use Git's stash feature to temporarily shelve (or stash) changes you've made to your working copy so that you can work on something else, and then come back and re-apply the stashed changes later on. -- adapted from Atlassian
Sourcetree
Follow this article from Sourcetree creators. Note that the GUI shown in the article is slightly outdated but you should be able to map it to the current GUI.
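CLI
A minimal sketch of the equivalent commands (see the official Git documentation for the full set of options):
git stash: shelves the uncommitted changes in the working directory
git stash list: shows the stashes you have shelved so far
git stash pop: re-applies the most recently stashed changes and removes them from the stash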
Introduction
What
Can explain IDEs
Professional software engineers often write code using Integrated Development Environments (IDEs). IDEs support most development-related work within the same tool (hence, the term integrated).
An IDE generally consists of:
A source code editor that includes features such as syntax coloring, auto-completion, easy code navigation, error highlighting, and code-snippet generation.
A compiler and/or an interpreter (together with other build automation support) that facilitates the compilation/linking/running/deployment of a program.
A debugger that allows the developer to execute the program one step at a time to observe the run-time behavior in order to locate bugs.
Other tools that aid various aspects of coding e.g. support for automated testing, drag-and-drop construction of UI components, version management support, simulation of the target runtime platform, and modeling support.
Examples of popular IDEs:
Java: Eclipse, IntelliJ IDEA, NetBeans
C#, C++: Visual Studio
Swift: Xcode
Python: PyCharm
Some web-based IDEs have appeared in recent times too e.g., Amazon's Cloud9 IDE.
Some experienced developers, in particular those with a UNIX background, prefer lightweight yet powerful text editors with scripting capabilities (e.g. Emacs) over heavier IDEs.
Exercises:
Which of these are features available in IDEs?
Integration
Combining parts of a software product to form a whole is called integration. It is also one of the most troublesome tasks and it rarely goes smoothly.
Approaches
Late and one time versus early and frequent
Can explain how integration approaches vary based on timing and frequency
In terms of timing and frequency, there are two general approaches to integration: late and one-time, early and frequent.
Late and one-time: wait till all components are completed and integrate all finished components near the end of the project.
This approach is not recommended because integration often causes many component incompatibilities (due to previous miscommunications and misunderstandings) to surface which can lead to delivery delays i.e. Late integration → incompatibilities found → major rework required → cannot meet the delivery date.
Early and frequent: integrate early and evolve each part in parallel, in small steps, re-integrating frequently.
A skeleton of the whole system can be written first. This can be done by one developer, possibly the one in charge of integration. After that, all developers can flesh out the skeleton in parallel, adding one feature at a time. After each feature is done, simply integrate the new code into the main system.
Here is an animation that compares the two approaches:
Big-bang versus incremental integration
Can explain how integration approaches vary based on amount merged at a time
Big-bang integration: integrate all components at the same time.
Big-bang is not recommended because it will uncover too many problems at the same time which could make debugging and bug-fixing more complex than when problems are uncovered incrementally.
Incremental integration: integrate a few components at a time. This approach is better than big-bang integration because it surfaces integration problems in a more manageable way.
Here is an animation that compares the two approaches:
Exercises:
Big-bang integration in school projects
Top-down versus bottom-up integration
Can explain how integration approaches vary based on the order of integration
Based on the order in which components are integrated, incremental integration can be done in three ways.
Top-down integration: higher-level components are integrated before bringing in the lower-level components. One advantage of this approach is that higher-level problems can be discovered early. One disadvantage is that this requires the use of stubs in place of lower level components until the real lower-level components are integrated into the system. Otherwise, higher-level components cannot function as they depend on lower level ones.
Bottom-up integration: the reverse of top-down integration. Note that when integrating lower level components, drivers may be needed to test the integrated components because the UI may not be integrated yet, just like how top-down integration needs stubs (a small sketch of both is given below).
Sandwich integration: a mix of the top-down and bottom-up approaches. The idea is to do both top-down and bottom-up so as to 'meet' in the middle.
Here is an animation that compares the three approaches:
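To make the stubs and drivers mentioned above more concrete, here is a minimal Java sketch (the StorageComponent name and its method are made up purely for illustration):

    // Interface of a lower-level component (name assumed for illustration).
    interface StorageComponent {
        void save(String data);
    }

    // Top-down integration: a stub stands in for the not-yet-integrated lower-level
    // component, so that the higher-level components can already be run and tested.
    class StorageStub implements StorageComponent {
        @Override
        public void save(String data) {
            // does nothing; provides just enough behavior for higher-level components to work
        }
    }

    // Bottom-up integration: a driver exercises a lower-level component directly,
    // because the higher-level components (e.g. the UI) are not integrated yet.
    class StorageDriver {
        public static void main(String[] args) {
            StorageComponent storage = new StorageStub(); // swap in the real component when it is ready
            storage.save("sample data");
        }
    }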
Exercises:
Suggest an integration strategy
Integration order
Build Automation
What
Can explain build automation tools
Build automation tools automate the steps of the build process, usually by means of build scripts.
In a non-trivial project, building a product from its source code can be a complex multi-step process. For example, it can include steps such as: pull code from the revision control system, compile, link, run automated tests, automatically update release documents (e.g. build number), package into a distributable, push to repo, deploy to a server, delete temporary files created during building/testing, email developers of the new build, and so on. Furthermore, this build process can be done ‘on demand’, it can be scheduled (e.g. every day at midnight) or it can be triggered by various events (e.g. triggered by a code push to the revision control system).
Some of these build steps such as compiling, linking and packaging, are already automated in most modern IDEs. For example, several steps happen automatically when the ‘build’ button of the IDE is clicked. Some IDEs even allow customization of this build process to some extent.
However, most big projects use specialized build tools to automate complex build processes.
Some other build tools: Grunt (JavaScript), Rake (Ruby)
Some build tools also serve as dependency management tools. Modern software projects often depend on third party libraries that evolve constantly. That means developers need to download the correct version of the required libraries and update them regularly. Therefore, dependency management is an important part of build automation. Dependency management tools can automate that aspect of a project.
Maven and Gradle, in addition to managing the build process, can play the role of dependency management tools too.
Can explain continuous integration and continuous deployment
An extreme application of build automation is called continuous integration (CI) in which integration, building, and testing happens automatically after each code change.
A natural extension of CI is Continuous Deployment (CD) where the changes are not only integrated continuously, but also deployed to end-users at the same time.
This video explains how to automate the 'Extract variable' refactoring using IntelliJ IDEA. Most other refactorings available work similarly. i.e. select the code to refactor → find the refactoring in the context menu or use the keyboard shortcut.
Here's another video explaining how to do some more useful refactorings in IntelliJ IDEA.
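As a simple illustration (not taken from the videos), here is roughly what the 'Extract variable' refactoring does to a piece of Java code:

    // Before: the discount calculation is buried inside a longer expression.
    double price(int quantity, double unitPrice) {
        return quantity * unitPrice - Math.max(0, quantity - 500) * unitPrice * 0.05;
    }

    // After 'Extract variable': the extracted expression gets a descriptive name.
    double price(int quantity, double unitPrice) {
        double bulkDiscount = Math.max(0, quantity - 500) * unitPrice * 0.05;
        return quantity * unitPrice - bulkDiscount;
    }

The behavior is unchanged; only the readability improves.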
What
A model is a representation of something else.
A class diagram is a model that represents a software design.
A model provides a simpler view of a complex entity because a model captures only a selected aspect. This omission of some aspects implies models are abstractions.
A class diagram captures the structure of the software design but not the behavior.
Multiple models of the same entity may be needed to capture it fully.
In addition to a class diagram (or even multiple class diagrams), a number of other diagrams may be needed to capture various interesting aspects of the software.
How
Can explain how models are used
In software development, models are useful in several ways:
a) To analyze a complex entity related to software development.
Some examples of using models for analysis:
Models of the problem domain can be built to aid the understanding of the problem to be solved.
When planning a software solution, models can be created to figure out how the solution is to be built. An architecture diagram is such a model.
b) To communicate information among stakeholders. Models can be used as a visual aid in discussions and documentation.
Some examples of using models to communicate:
You can use an architecture diagram to explain the high-level design of the software to developers.
A business analyst can use a use case diagram to explain to the customer the functionality of the system.
A class diagram can be reverse-engineered from code so as to help explain the design of a component to a new developer.
c) As a blueprint for creating software. Models can be used as instructions for building software.
Some examples of using models as blueprints:
A senior developer draws a class diagram to propose a design for an OOP software and passes it to a junior programmer to implement.
A software tool allows users to draw UML models using its interface and the tool automatically generates the code based on the model.
Model Driven Development extra
Exercises:
Statements about models
Explain usage of models in a class project
UML models
Can identify UML models
Unified Modeling Language (UML) is a graphical notation to describe various aspects of a software system. UML is the brainchild of three software modeling specialists: James Rumbaugh, Grady Booch, and Ivar Jacobson (also known as the Three Amigos). Each of them had developed their own notation for modeling software systems before joining forces to create a unified modeling language (hence, the term ‘Unified’ in UML). UML is currently the most commonly used modeling notation in the software industry.
The following diagram uses the class diagram notation to show the different types of UML diagrams.
Can use simple class diagrams and sequence diagrams to model an OO solution
Basic
As mentioned in [Design → Modeling → Modeling a Solution → Introduction], this is the Minesweeper design you have come up with so far. Our objective is to analyze, evaluate, and refine that design.
Let us start by modeling a sample interaction between the person playing the game and the TextUi object.
newgame and clear x y represent commands typed by the Player on the TextUi.
How does the TextUi object carry out the requests it has received from the player? It would need to interact with other objects of the system. Because the Logic class is the one that controls the game logic, the TextUi needs to collaborate with Logic to fulfill the newgame request. Let us extend the model to capture that interaction.
W = Width of the minefield; H = Height of the minefield
The above diagram assumes that W and H are the only information TextUi requires to display the minefield to the Player. Note that there could be other ways of doing this.
The Logic methods you conceptualized in our modeling so far are:
Now, let us look at what other objects and interactions are needed to support the newGame() operation. It is likely that a new Minefield object is created when the newGame() method is called.
Note that the behavior of the Minefield constructor has been abstracted away. It can be designed at a later stage.
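As a rough sketch, the Logic API conceptualized so far could be captured as a Java interface along these lines (newGame() comes from the interactions above; the other method names are assumptions made for illustration only):

    // A sketch only; most of these names are assumptions, not part of the example above.
    interface Logic {
        void newGame();                 // starts a new game, creating a new Minefield
        int getWidth();                 // W: width of the minefield, needed by TextUi
        int getHeight();                // H: height of the minefield, needed by TextUi
        void clearCellAt(int x, int y); // could handle the 'clear x y' command
    }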
Given below are the interactions between the player and the TextUi for the whole game.
Note that can be used when discovering/defining the architecture-level APIs.
Defining the architecture-level APIs for a small Tic-Tac-Toe game:
Communication diagrams
Communication diagrams are like sequence diagrams but emphasize the data links between the various participants in the interaction rather than the sequence of interactions.
An example:
Adapted from: UML Distilled by Martin Fowler
State machine diagrams
A State Machine Diagram models state-dependent behavior.
Consider how a CD player responds when the “eject CD” button is pushed:
If the CD tray is already open, it does nothing.
If the CD tray is already in the process of opening (opened half-way), it continues to open the CD tray.
If the CD tray is closed and the CD is being played, it stops playing and opens the CD tray.
If the CD tray is closed and CD is not being played, it simply opens the CD tray.
If the CD tray is already in the process of closing (closed half-way), it waits until the CD tray is fully closed and opens it immediately afterwards.
What this means is that the CD player’s response to pushing the “eject CD” button depends on what it was doing at the time of the event. More generally, the CD player’s response to the event received depends on its internal state. Such a behavior is called a state-dependent behavior.
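If this behavior were written directly in code, it might look something like the following sketch (a rough illustration only; the states and method names are assumed):

    // A rough sketch of the CD player's state-dependent response to the eject button.
    class CdPlayer {
        enum TrayState { OPEN, OPENING, CLOSING, CLOSED_PLAYING, CLOSED_IDLE }

        private TrayState state = TrayState.CLOSED_IDLE;

        void ejectButtonPushed() {
            switch (state) {
            case OPEN:           // already open: do nothing
            case OPENING:        // already opening: continue opening
                break;
            case CLOSED_PLAYING: // stop playing, then open the tray
                stopPlaying();
                state = TrayState.OPENING;
                break;
            case CLOSED_IDLE:    // simply open the tray
                state = TrayState.OPENING;
                break;
            case CLOSING:        // wait until fully closed, then open (simplified here)
                state = TrayState.OPENING;
                break;
            }
        }

        private void stopPlaying() {
            // stop playback (omitted)
        }
    }

The same object responds to the same event (the button push) differently depending on its current state.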
Often, state-dependent behavior displayed by an object in a system is simple enough that it needs no extra attention; such a behavior can be as simple as a conditional behavior like if x > y, then x = x - y.
Occasionally, objects may exhibit state-dependent behavior that is complex enough such that it needs to be captured in a separate model. Such state-dependent behavior can be modeled using UML state machine diagrams (SMD for short, sometimes also called ‘state charts’, ‘state diagrams’ or ‘state machines’).
An SMD views the life-cycle of an object as consisting of a finite number of states where each state displays a unique behavior pattern. SMDs capture information such as the states an object can be in during its lifetime, how the object responds to various events while in each state, and how the object transits from one state to another. In contrast to sequence diagrams that capture object behavior one scenario at a time, SMDs capture the object’s behavior over its full life-cycle.
An SMD for the Minesweeper game.
Conceptual Class Diagrams (aka OODMs)
The analysis process for identifying objects and object classes is recognized as one of the most difficult areas of object-oriented development. --Ian Sommerville, in the book Software Engineering
Sidebar: Domain Modeling
Domain modeling is modeling the problem domain, i.e. to model how things actually work in the real world. Domain modeling is useful in understanding the problem domain, which is essential to the success of a project.
Domain modeling can be done using,
a domain-specific modeling notation if such a notation exists (e.g., a modeling notation specific to the banking domain might have elements to represent loans, accounts, transactions etc.),
or a general purpose modeling notation, such as UML (e.g., you can use an activity diagram to model the workflow of processing a loan application),
or even other general purpose notations (e.g., you can use an organization chart to model the employee hierarchy of a company).
When building an OOP system, it makes sense to build OOP models of the problem domain, given OOP aspires to emulate the objects in the real world.
UML models that capture class structures in the problem domain are called conceptual class diagrams. They are in fact a lighter version of class diagrams, and are sometimes also called OO domain models (OODMs). The latter name is somewhat misleading, as conceptual class diagrams (CCDs) are actually only one type of domain model that can model an OOP problem domain.
The CCD of a snakes and ladders game is given below.
Description: The snakes and ladders game is played by two or more players using a board and a die. The board has 100 squares marked 1 to 100. Each player owns one piece. Players take turns to throw the die and advance their piece by the number of squares they earned from the die throw. The board has a number of snakes. If a player’s piece lands on a square with a snake head, the piece is automatically moved to the square containing the snake’s tail. Similarly, a piece can automatically move from a ladder foot to the ladder top. The player whose piece is the first to reach the 100th square wins.
CCDs do not contain solution-specific classes (i.e. classes that are used in the solution domain but do not exist in the problem domain). For example, a class called DatabaseConnection could appear in a class diagram but not usually in a CCD because DatabaseConnection is something related to a software solution but not an entity in the problem domain.
CCDs represent the class structure of the problem domain and not their behavior, just like class diagrams. To show behavior, use other diagrams such as sequence diagrams.
CCD notation is a subset of the class diagram notation (omits methods and navigability).
Exercises:
This diagram is,...
Difference between a class diagram and CCD?
OO structures
An OO solution is basically a network of objects interacting with each other. Therefore, it is useful to be able to model how the relevant objects are 'networked' together inside a software i.e. how the objects are connected together.
Given below is an illustration of some objects and how they are connected together. Note: the diagram uses an ad-hoc notation.
Note that these object structures within the same software can change over time.
Given below is how the object structure in the previous example could have looked like at a different time.
However, object structures do not change at random; they change based on a set of rules set by the designer of that software. Those rules that object structures need to follow can be illustrated as a class structurei.e. a structure that exists among the relevant classes.
Here is a class structure (drawn using an ad-hoc notation) that matches the object structures given in the previous two examples. For example, note how this class structure does not allow any connection between Genre objects and Author objects, a rule followed by the two object structures above.
UML Object Diagrams model object structures. UML Class Diagrams model class structures.
Here is an object diagram for the above example:
And here is the class diagram for it:
+
OO structures
An OO solution is basically a network of objects interacting with each other. Therefore, it is useful to be able to model how the relevant objects are 'networked' together inside a software i.e. how the objects are connected together.
Given below is an illustration of some objects and how they are connected together. Note: the diagram uses an ad-hoc notation.
Note that these object structures within the same software can change over time.
Given below is how the object structure in the previous example could have looked like at a different time.
However, object structures do not change at random; they change based on a set of rules set by the designer of that software. Those rules that object structures need to follow can be illustrated as a class structure, i.e. a structure that exists among the relevant classes.
Here is a class structure (drawn using an ad-hoc notation) that matches the object structures given in the previous two examples. For example, note how this class structure does not allow any connection between Genre objects and Author objects, a rule followed by the two object structures above.
UML Object Diagrams model object structures. UML Class Diagrams model class structures.
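To make this concrete, here is a minimal Java sketch of a class structure that would allow the kind of object structures described above. Author and Genre are named in the example; the Book class and its fields are assumptions made only for illustration.

```java
class Author {
    String name;

    Author(String name) {
        this.name = name;
    }
}

class Genre {
    String name;

    Genre(String name) {
        this.name = name;
    }
}

class Book {
    String title;
    Author author; // a Book object may be connected to an Author object
    Genre genre;   // ... and to a Genre object

    Book(String title, Author author, Genre genre) {
        this.title = title;
        this.author = author;
        this.genre = genre;
    }
}
// There is no field linking Author and Genre, so no object structure created from
// these classes can connect an Author object directly to a Genre object.
```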
Basic
Objects in an OO solution need to be connected to each other to form a network so that they can interact with each other. Such connections between objects are called associations.
Suppose an OOP program for managing a learning management system creates an object structure to represent the related objects. In that object structure you can expect to have associations between a Course object that represents a specific course and Student objects that represent students taking that course.
Associations in an object structure can change over time.
To continue the previous example, the associations between a Course object and Student objects can change as students enroll in the course or drop the course over time.
Associations among objects can be generalized as associations between the corresponding classes too.
In our example, as some Course objects can have associations with some Student objects, you can view it as an association between the Course class and the Student class.
Implementing associations
You use instance-level variables to implement associations.
In our example, the Course class can have a students variable to keep track of students associated with a particular course.
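As a rough sketch (the class details are assumptions, not taken from the text), the Course to Student association could be implemented in Java using an instance-level variable like this:

```java
import java.util.ArrayList;
import java.util.List;

class Student {
    private final String name;

    Student(String name) {
        this.name = name;
    }
}

class Course {
    // instance-level variable that implements the Course -> Student associations
    private final List<Student> students = new ArrayList<>();

    void enroll(Student student) {
        students.add(student);    // an association is added when a student enrolls
    }

    void drop(Student student) {
        students.remove(student); // an association is removed when a student drops the course
    }
}
```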
Class-level members
Can explain class-level members
While all objects of a class have the same attributes, each object has its own copy of the attribute value.
All Person objects have the name attribute but the value of that attribute varies between Person objects.
However, some attributes are not suitable to be maintained by individual objects. Instead, they should be maintained centrally, shared by all objects of the class. They are like ‘global variables’ but attached to a specific class. Such variables whose value is shared by all instances of a class are called class-level attributes.
The attribute totalPersons should be maintained centrally and shared by all Person objects rather than copied at each Person object.
Similarly, when a normal method is being called, a message is being sent to the receiving object and the result may depend on the receiving object.
Sending the getName() message to the Adam object results in the response "Adam" while sending the same message to the Beth object results in the response "Beth".
However, there can be methods related to a specific class but not suitable for sending messages to a specific object of that class. Such methods that are called using the class instead of a specific instance are called class-level methods.
The method getTotalPersons() is not suitable to send to a specific Person object because a specific object of the Person class should not have to know about the total number of Person objects.
Class-level attributes and methods are collectively called class-level members (also called static members sometimes because some programming languages use the keyword static to identify class-level members). They are to be accessed using the class name rather than an instance of the class.
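To illustrate, here is a minimal Java sketch of a Person class with a class-level totalPersons attribute and a class-level getTotalPersons() method; the constructor and field details are assumptions made for this example.

```java
class Person {
    private static int totalPersons = 0; // class-level attribute: one copy shared by all Person objects

    private final String name; // instance-level attribute: each Person object has its own copy

    Person(String name) {
        this.name = name;
        totalPersons++;
    }

    String getName() { // instance-level method: the result depends on the receiving object
        return name;
    }

    static int getTotalPersons() { // class-level method: called using the class, not a specific object
        return totalPersons;
    }
}

// accessed using the class name rather than an instance:
// int count = Person.getTotalPersons();
```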
Enumerations
Can explain the meaning of enumerations
An Enumeration is a fixed set of values that can be considered as a data type. An enumeration is often useful when using a regular data type such as int or String would allow invalid values to be assigned to a variable.
Suppose you want a variable called priority to store the priority of something. There are only three priority levels: high, medium, and low. You can declare the variable priority as of type int and use only values 2, 1, and 0 to indicate the three priority levels. However, this opens the possibility of an invalid value such as 9 being assigned to it. But if you define an enumeration type called Priority that has three values HIGH, MEDIUM and LOW only, a variable of type Priority will never be assigned an invalid value because the compiler is able to catch such an error.
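A minimal Java sketch of such an enumeration (the Task class is a made-up client used only to show the idea):

```java
enum Priority {
    HIGH, MEDIUM, LOW
}

class Task {
    private Priority priority;

    void setPriority(Priority priority) {
        this.priority = priority; // only HIGH, MEDIUM, or LOW can ever be assigned
    }
}

// Task t = new Task();
// t.setPriority(Priority.MEDIUM); // OK
// t.setPriority(9);               // does not compile: 9 is not a Priority
```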
Classes
What
Can explain the relationship between classes and objects
Writing an OOP program is essentially writing instructions that the computer will use to,
create the virtual world of the object network, and
provide it the inputs to produce the outcome you want.
A class contains instructions for creating a specific kind of objects. Often, multiple objects keep the same type of data and show the same behavior because they are of the same kind. Instructions for creating a 'kind' (or 'class') of objects can be written once, and those same instructions can be used to instantiate objects of that kind. We call such instructions a Class.
Classes and objects in an example scenario
Consider the example of writing an OOP program to calculate the average age of Adam, Beth, Charlie, and Daisy.
Instructions for creating objects Adam, Beth, Charlie, and Daisy will be very similar because they are all of the same kind: they all represent ‘persons’ with the same interface, the same kind of data (i.e. name, dateOfBirth, etc.), and the same kind of behavior (i.e. getAge(Date), getName(), etc.). Therefore, you can have a class called Person containing instructions on how to create Person objects and use that class to instantiate objects Adam, Beth, Charlie, and Daisy.
Similarly, you need AgeList, Calculator, and Main classes to instantiate one each of AgeList, Calculator, and Main objects.
Class → Objects
Person → objects representing Adam, Beth, Charlie, Daisy
AgeList → an object to represent the age list
Calculator → an object to do the calculations
Main → an object to represent you (i.e., the one who manages the whole operation)
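As a rough illustration, the Person class described above might look like this in Java. This sketch uses LocalDate in place of the Date type mentioned in the text, and the fields and method bodies are assumptions made for the example.

```java
import java.time.LocalDate;
import java.time.Period;

class Person {
    private final String name;
    private final LocalDate dateOfBirth;

    Person(String name, LocalDate dateOfBirth) {
        this.name = name;
        this.dateOfBirth = dateOfBirth;
    }

    String getName() {
        return name;
    }

    int getAge(LocalDate asAt) {
        return Period.between(dateOfBirth, asAt).getYears();
    }
}

// the same instructions (the class) can be used to create multiple objects of that kind:
// Person adam = new Person("Adam", LocalDate.of(1990, 1, 1));
// Person beth = new Person("Beth", LocalDate.of(1992, 5, 20));
```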
Exercises:
Identify Classes and Objects
Classes for CityConnect app
Abstract classes and methods
Abstract class: A class declared as an abstract class cannot be instantiated, but it can be subclassed.
You can declare a class as abstract when it is merely a representation of commonalities among its subclasses, in which case it does not make sense to instantiate objects of that class.
The Animal class that exists as a generalization of its subclasses Cat, Dog, Horse, Tiger etc. can be declared as abstract because it does not make sense to instantiate an Animal object.
Abstract method: An abstract method is a method signature without a method implementation.
The move method of the Animal class is likely to be an abstract method as it is not possible to implement a move method at the Animal class level to fit all subclasses because each animal type can move in a different way.
A class that has an abstract method becomes an abstract class because the class definition is incomplete (due to the missing method body) and it is not possible to create objects using an incomplete class definition.
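A minimal Java sketch of the Animal example (the Cat subclass body is an assumption used only to show the idea):

```java
abstract class Animal {
    abstract void move(); // abstract method: a signature without an implementation
}

class Cat extends Animal {
    @Override
    void move() {
        // each subclass supplies its own way of moving
        System.out.println("walks");
    }
}

// Animal a = new Animal(); // compile error: Animal is abstract and cannot be instantiated
// Animal a = new Cat();    // OK: the concrete subclass can be instantiated
```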
Interfaces
An interface is a behavior specification, i.e. a collection of method specifications. If a class implements the interface, it means the class is able to support the behaviors specified by the said interface.
There are a number of situations in software engineering when it is important for disparate groups of programmers to agree to a "contract" that spells out how their software interacts. Each group should be able to write their code without any knowledge of how the other group's code is written. Generally speaking, interfaces are such contracts. --Oracle Docs on Java
Suppose SalariedStaff is an interface that contains two methods setSalary(int) and getSalary(). AcademicStaff can declare itself as implementing the SalariedStaff interface, which means the AcademicStaff class must implement all the methods specified by the SalariedStaff interface i.e., setSalary(int) and getSalary().
A class implementing an interface results in an is-a relationship, just like in class inheritance.
In the example above, AcademicStaff is a SalariedStaff. An AcademicStaff object can be used anywhere a SalariedStaff object is expected e.g. SalariedStaff ss = new AcademicStaff().
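A minimal Java sketch of the SalariedStaff example; only the method names come from the text, while the field and method bodies are assumptions.

```java
interface SalariedStaff {
    void setSalary(int salary);
    int getSalary();
}

class AcademicStaff implements SalariedStaff {
    private int salary;

    @Override
    public void setSalary(int salary) {
        this.salary = salary;
    }

    @Override
    public int getSalary() {
        return salary;
    }
}

// An AcademicStaff object can be used anywhere a SalariedStaff object is expected:
// SalariedStaff ss = new AcademicStaff();
```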
Overloading
Method overloading is when there are multiple methods with the same name but different type signatures. Overloading is used to indicate that multiple operations do similar things but take different parameters.
Type signature: The type signature of an operation is the type sequence of the parameters. The return type and parameter names are not part of the type signature. However, the parameter order is significant.
Example:
Method → Type Signature
int add(int X, int Y) → (int, int)
void add(int A, int B) → (int, int)
void m(int X, double Y) → (int, double)
void m(double X, int Y) → (double, int)
In the case below, the calculate method is overloaded because the two methods have the same name but different type signatures (String) and (int).
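A minimal Java sketch of such an overloaded calculate method (the method bodies are assumptions):

```java
class Calculator {
    void calculate(String expression) {
        // evaluate an expression given as text, e.g. "1 + 2" (details omitted)
    }

    void calculate(int value) {
        // evaluate using a single integer value (details omitted)
    }
}
```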
Overriding
Method overriding is when a subclass changes the behavior inherited from the parent class by re-implementing the method. Overridden methods have the same name, the same type signature, and the same (or a subtype of the) return type.
Consider the following case of EvaluationReport class inheriting the Report class:
Report methods → EvaluationReport methods → Overrides?
print() → print() → Yes
write(String) → write(String) → Yes
read():String → read(int):String → No. Reason: the two methods have different signatures; this is a case of overloading (rather than overriding).
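A minimal Java sketch of the Report and EvaluationReport case; the method names and signatures come from the table above, while the method bodies are assumptions.

```java
class Report {
    void print() {
        System.out.println("Report");
    }

    void write(String content) {
        // store the given content (details omitted)
    }

    String read() {
        return "";
    }
}

class EvaluationReport extends Report {
    @Override
    void print() { // same name and type signature: overrides Report#print()
        System.out.println("Evaluation report");
    }

    @Override
    void write(String content) { // same name and type signature: overrides Report#write(String)
        // store the content with extra validation (details omitted)
    }

    String read(int page) { // different type signature: overloads rather than overrides
        return "";
    }
}
```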
What
The OOP concept Inheritance allows you to define a new class based on an existing class.
For example, you can use inheritance to define an EvaluationReport class based on an existing Report class so that the EvaluationReport class does not have to duplicate data/behaviors that are already implemented in the Report class. The EvaluationReport can inherit the wordCount attribute and the print() method from the base class Report.
Other names for Base class: Parent class, Superclass
Other names for Derived class: Child class, Subclass, Extended class
A superclass is said to be more general than the subclass. Conversely, a subclass is said to be more specialized than the superclass.
Applying inheritance on a group of similar classes can result in the common parts among classes being extracted into more general classes.
Man and Woman behave the same way for certain things. However, the two classes cannot be simply replaced with a more general class Person because of the need to distinguish between Man and Woman for certain other things. A solution is to add the Person class as a superclass (to contain the code common to men and women) and let Man and Woman inherit from Person class.
Inheritance implies the derived class can be considered as a sub-type of the base class (and the base class is a super-type of the derived class), resulting in an is-a relationship.
Inheritance does not necessarily mean a sub-type relationship exists. However, the two often go hand-in-hand. For simplicity, at this point let us assume inheritance implies a sub-type relationship.
To continue the previous example,
Woman is a Person
Man is a Person
Inheritance relationships through a chain of classes can result in inheritance hierarchies (aka inheritance trees).
Two inheritance hierarchies/trees are given below. Note that the triangle points to the parent class. Observe how the Parrot is a Bird as well as an Animal.
Multiple Inheritance is when a class inherits directly from multiple classes. Multiple inheritance among classes is allowed in some languages (e.g., Python, C++) but not in other languages (e.g., Java, C#).
The Honey class inherits from the Food class and the Medicine class because honey can be consumed as a food as well as a medicine (in some oriental medicine practices). Similarly, a Car is a Vehicle, an Asset and a Liability.
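Going back to the single-inheritance Person example above, a minimal Java sketch might look like this; the fields and constructors are assumptions made for illustration. (Note that the multiple-inheritance Honey example cannot be expressed with Java classes, since Java does not allow multiple inheritance among classes.)

```java
class Person {
    private final String name; // common to Man and Woman, so it lives in the superclass

    Person(String name) {
        this.name = name;
    }

    String getName() {
        return name;
    }
}

class Man extends Person {
    Man(String name) {
        super(name);
    }
    // inherits name and getName(); adds only what is specific to Man
}

class Woman extends Person {
    Woman(String name) {
        super(name);
    }
    // inherits name and getName(); adds only what is specific to Woman
}

// A Woman is a Person, so a Woman object can be used where a Person object is expected:
// Person p = new Woman("Beth");
```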
Introduction
What
Can describe OOP at a higher level
Object-Oriented Programming (OOP) is a programming paradigm. A programming paradigm guides programmers to analyze programming problems, and structure programming solutions, in a specific way.
Programming languages have traditionally divided the world into two parts—data and operations on data. Data is static and immutable, except as the operations may change it. The procedures and functions that operate on data have no lasting state of their own; they’re useful only in their ability to affect data.
This division is, of course, grounded in the way computers work, so it’s not one that you can easily ignore or push aside. Like the equally pervasive distinctions between matter and energy and between nouns and verbs, it forms the background against which you work. At some point, all programmers—even object-oriented programmers—must lay out the data structures that their programs will use and define the functions that will act on the data.
With a procedural programming language like C, that’s about all there is to it. The language may offer various kinds of support for organizing data and functions, but it won’t divide the world any differently. Functions and data structures are the basic elements of design.
Object-oriented programming doesn’t so much dispute this view of the world as restructure it at a higher level. It groups operations and data into modular units called objects and lets you combine objects into structured networks to form a complete program. In an object-oriented programming language, objects and object interactions are the basic elements of design.
Some programming languages support multiple paradigms.
Java is primarily an OOP language but it supports limited forms of functional programming and it can be used to (although not recommended to) write procedural code. e.g. se-edu/addressbook-level1
JavaScript and Python support functional, procedural, and OOP programming.
More
Miscellaneous
Can answer frequently asked OOP questions
What is the difference between a Class, an Abstract Class, and an Interface?
An interface is a behavior specification with no implementation.
A class is a behavior specification + implementation.
An abstract class is a behavior specification + a possibly incomplete implementation.
How does overriding differ from overloading?
Overloading is used to indicate that multiple operations do similar things but take different parameters. Overloaded methods have the same method name but different method signatures and possibly different return types.
Overriding is when a sub-class redefines an operation using the same method name and the same type signature. Overridden methods have the same name, same method signature, and same return type.
Review
Can combine some OOP concepts
...
Exercises:
In the context of OOP, what is the relationship between abstraction and encapsulation?
Which of these do not belong to the four main OO principles?
Objects as abstractions
The concept of Objects in OOP is an abstraction mechanism because it allows us to abstract away the lower level details and work with bigger granularity entities i.e. ignore details of data formats and the method implementation details and work at the level of objects.
You can deal with a Person object that represents the person Adam and query the object for Adam's age instead of dealing with details such as Adam’s date of birth (DoB), in what format the DoB is stored, the algorithm used to calculate the age from the DoB, etc.
Polymorphism allows you to write code targeting superclass objects, use that code on subclass objects, and achieve possibly different results based on the actual class of the object.
Assume classes Cat and Dog are both subclasses of the Animal class. You can write code targeting Animal objects and use that code on Cat and Dog objects, achieving possibly different results based on whether it is a Cat object or a Dog object. Some examples:
Declare an array of type Animal and still be able to store Dog and Cat objects in it.
Define a method that takes an Animal object as a parameter and yet be able to pass Dog and Cat objects to it.
Call a method on a Dog or a Cat object as if it is an Animal object (i.e., without knowing whether it is a Dog object or a Cat object) and get a different response from it based on its actual class e.g., call the Animal class's method speak() on object a and get a "Meow" as the return value if a is a Cat object and "Woof" if it is a Dog object.
Polymorphism literally means "ability to take many forms".
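A minimal Java sketch of the Animal, Cat, and Dog example described above (declaring Animal as abstract is a design choice made for this sketch, not something stated in the text):

```java
abstract class Animal {
    abstract String speak();
}

class Cat extends Animal {
    @Override
    String speak() { return "Meow"; }
}

class Dog extends Animal {
    @Override
    String speak() { return "Woof"; }
}

class Main {
    public static void main(String[] args) {
        // an Animal array can store Cat and Dog objects (substitutability)
        Animal[] animals = { new Cat(), new Dog() };
        for (Animal a : animals) {
            // the code targets Animal, but the actual object's speak() runs (dynamic binding)
            System.out.println(a.speak()); // prints "Meow" then "Woof"
        }
    }
}
```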
How
Can explain how substitutability, operation overriding, and dynamic binding relate to polymorphism
Three concepts combine to achieve polymorphism: substitutability, operation overriding, and dynamic binding.
Substitutability: Because of substitutability, you can write code that expects objects of a parent class and yet use that code with objects of child classes. That is how polymorphism is able to treat objects of different types as one type.
Overriding: To get polymorphic behavior from an operation, the operation in the superclass needs to be overridden in each of the subclasses. That is how overriding allows objects of different subclasses to display different behaviors in response to the same method call.
Dynamic binding: Calls to overridden methods are bound to the implementation of the actual object's class dynamically during the runtime. That is how the polymorphic code can call the method of the parent class and yet execute the implementation of the child class.
Dependency inversion principle (DIP):
High-level modules should not depend on low-level modules. Both should depend on abstractions.
Abstractions should not depend on details. Details should depend on abstractions.
Example:
In design (a), the higher level class Payroll depends on the lower level class Employee, which is a violation of DIP. In design (b), both Payroll and Employee depend on the Payee interface (note that inheritance is a dependency).
Design (b) is more flexible (and less coupled) because now the Payroll class need not change when the Employee class changes.
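A rough Java sketch of design (b); the Payee method name getPay() and the class bodies are assumptions, since the text does not name them.

```java
import java.util.List;

interface Payee {
    int getPay();
}

class Employee implements Payee { // low-level class depends on (implements) the abstraction
    @Override
    public int getPay() {
        return 5000; // placeholder amount
    }
}

class Payroll {
    // Payroll (high level) depends only on the Payee abstraction,
    // so it need not change when Employee changes.
    int computeTotal(List<Payee> payees) {
        int total = 0;
        for (Payee p : payees) {
            total += p.getPay();
        }
        return total;
    }
}
```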
Exercises:
Which of these statements is true about the Dependency Inversion Principle?
DRY (Don't Repeat Yourself) principle: Every piece of knowledge must have a single, unambiguous, authoritative representation within a system. -- The Pragmatic Programmer, by Andy Hunt and Dave Thomas
This principle guards against the duplication of information.
A functionality being implemented twice is a violation of the DRY principle even if the two implementations are different.
The value of a system-wide timeout being defined in multiple places is a violation of DRY.
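As a small illustration (the Config and NetworkClient classes are made up for this example), keeping the timeout in a single authoritative place might look like this:

```java
class Config {
    // the system-wide timeout is defined in exactly one place
    static final int TIMEOUT_IN_SECONDS = 30;
}

class NetworkClient {
    void connect() {
        waitFor(Config.TIMEOUT_IN_SECONDS); // reused, not redefined
    }

    private void waitFor(int seconds) {
        // wait up to the given number of seconds (details omitted)
    }
}
```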
Liskov substitution principle (LSP): Derived classes must be substitutable for their base classes. -- proposed by Barbara Liskov
LSP sounds the same as substitutability but it goes beyond substitutability; LSP implies that a subclass should not be more restrictive than the behavior specified by the superclass. As you know, Java has language support for substitutability. However, if LSP is not followed, substituting a subclass object for a superclass object can break the functionality of the code.
Suppose the Payroll class depends on the adjustMySalary(int percent) method of the Staff class. Furthermore, the Staff class states that the adjustMySalary method will work for all positive percent values. Both the Admin and Academic classes override the adjustMySalary method.
Now consider the following:
The Admin#adjustMySalary method works for both negative and positive percent values.
The Academic#adjustMySalary method works for percent values 1..100 only.
In the above scenario,
The Admin class follows LSP because it fulfills Payroll’s expectation of Staff objects (i.e. it works for all positive values). Substituting Admin objects for Staff objects will not break the Payroll class functionality.
The Academic class violates LSP because it will not work for percent values over 100 as expected by the Payroll class. Substituting Academic objects for Staff objects can potentially break the Payroll class functionality.
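A minimal Java sketch of the scenario above (the salary calculation and the exception are illustrative details, not from the text):

    class Staff {
        protected double salary = 1000;

        // Stated behavior: works for all positive percent values.
        void adjustMySalary(int percent) {
            salary = salary * (100 + percent) / 100.0;
        }
    }

    class Admin extends Staff {
        // Accepts negative values as well as all positive values:
        // no more restrictive than Staff, so it follows LSP.
        @Override
        void adjustMySalary(int percent) {
            salary = salary * (100 + percent) / 100.0;
        }
    }

    class Academic extends Staff {
        // Rejects positive values over 100: more restrictive than what
        // Staff promises, so it violates LSP.
        @Override
        void adjustMySalary(int percent) {
            if (percent < 1 || percent > 100) {
                throw new IllegalArgumentException("percent must be 1..100");
            }
            salary = salary * (100 + percent) / 100.0;
        }
    }

    class Payroll {
        // Written against the Staff behavior; can break if given an Academic
        // object and a percent value over 100.
        void giveRaise(Staff staff, int percent) {
            staff.adjustMySalary(percent);
        }
    }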
Separation of concerns principle (SoC): To achieve better modularity, separate the code into distinct sections, such that each section addresses a separate concern. -- Proposed by Edsger W. Dijkstra
A concern in this context is a set of information that affects the code of a computer program.
Examples for concerns:
A specific feature, such as the code related to the add employee feature
A specific aspect, such as the code related to persistence or security
A specific entity, such as the code related to the Employee entity
Applying this principle reduces functional overlaps among code sections and also limits the ripple effect when changes are introduced to a specific part of the system.
If the code related to persistence is separated from the code related to security, a change to how the data are persisted will not need changes to how the security is implemented.
This principle can be applied at the class level, as well as at higher levels.
The n-tier architecture utilizes this principle. Each layer in the architecture has a well-defined functionality that does not overlap with that of the other layers.
This principle should lead to higher cohesion and lower coupling.
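A minimal sketch of how the persistence and security concerns mentioned above could be kept in separate classes (the class and method names are illustrative):

    // Persistence concern: changes to how data is stored are confined to this class.
    class EmployeeStorage {
        void save(String employeeRecord) {
            // e.g., write the record to a file or a database
        }
    }

    // Security concern: changes to how access is checked are confined to this class.
    class AccessChecker {
        boolean canEdit(String userName) {
            // e.g., look up the user's role
            return true;
        }
    }

    // Uses both concerns without tangling their code together.
    class EmployeeManager {
        private final EmployeeStorage storage = new EmployeeStorage();
        private final AccessChecker accessChecker = new AccessChecker();

        void updateEmployee(String userName, String employeeRecord) {
            if (accessChecker.canEdit(userName)) {
                storage.save(employeeRecord);
            }
        }
    }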
Single responsibility principle (SRP): A class should have one, and only one, reason to change. -- Robert C. Martin
If a class has only one responsibility, it needs to change only when there is a change to that responsibility.
Consider a TextUi class that does parsing of the user commands as well as interacting with the user. That class needs to change when the formatting of the UI changes as well as when the syntax of the user command changes. Hence, such a class does not follow the SRP.
Gather together the things that change for the same reasons. Separate those things that change for different reasons. ―- Agile Software Development, Principles, Patterns, and Practices by Robert C. Martin
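Applying SRP to the TextUi example above could look like the following minimal sketch, in which parsing and user interaction are split into two classes, each with a single reason to change (the names other than TextUi are illustrative):

    // Reason to change: the syntax of user commands.
    class Parser {
        String getCommandWord(String userInput) {
            return userInput.trim().split(" ")[0];
        }
    }

    // Reason to change: the formatting of the UI.
    class TextUi {
        void showMessage(String message) {
            System.out.println("== " + message + " ==");
        }
    }

    public class App {
        public static void main(String[] args) {
            Parser parser = new Parser();
            TextUi ui = new TextUi();
            String commandWord = parser.getCommandWord("add John Doe");
            ui.showMessage("Command entered: " + commandWord);
        }
    }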
YAGNI (You Aren't Gonna Need It!) Principle: Do not add code simply because ‘you might need it in the future’.
The principle says that some capability you presume your software needs in the future should not be built now because chances are "you aren't gonna need it". The rationale is that you do not have perfect information about the future and therefore some of the extra work you do to fulfill a potential future need might go to waste when some of your predictions fail to materialize.
Resources:
Yagni -- A detailed article explaining YAGNI, written by Martin Fowler.
Software development goes through different stages such as requirements, analysis, design, implementation and testing. These stages are collectively known as the software development life cycle (SDLC). There are several approaches, known as software development life cycle models (also called software process models), that describe different ways to go through the SDLC. Each process model prescribes a 'roadmap' for the software developers to manage the development effort. The roadmap describes the aims of the development stages, the outcome of each stage, and the workflow i.e. the relationship between stages.
Sequential models
Can explain sequential process models
The sequential model, also called the waterfall model, views software development as a linear process, in which the project is seen as progressing through the development stages. The name waterfall stems from how the model is drawn to look like a waterfall (see below).
When one stage of the process is completed, it produces some artifacts to be used in the next stage. For example, the requirements stage produces a comprehensive list of requirements, to be used in the design phase.
A strict sequential model project moves only in the forward direction i.e., each stage is completed before starting the next. For example, once the requirements stage is over, there is no provision for revising the requirements later.
This model can work well for a project that produces software to solve a well-understood problem, in which case the requirements can remain stable and the effort can be estimated accurately. Furthermore, as each stage has a well-defined outcome, it is easy to track the progress of the project because one can gauge the project progress by monitoring which stage the project is in.
However, real-world projects often tackle problems that are not well-understood at the beginning, making them unsuitable for this model. For example, target users of a software product may not be able to state their requirements accurately at the start of the project, if they have not used a similar product before.
Iterative models
Can explain iterative process models
The iterative model advocates producing the software by going through several iterations. Each of the iterations could potentially go through all the stages of the SDLC, from requirements gathering to deployment.
Each iteration produces a new version of the product, building upon the version produced in the previous iteration. Feedback from each iteration is factored into the subsequent iterations. For example, if an implementation task took longer than expected, the effort estimate for similar tasks in future iterations can be adjusted accordingly. Similarly, if a feature introduced in the current iteration was not well-received by target users, it can be removed or tweaked in the next iteration.
The iterative model can be done in breadth-first or depth-first approach.
In the breadth-first approach, an iteration evolves all major components and all functionality areas in parallel, i.e., most features and most components will be updated in each iteration, producing a working product at the end of each iteration.
In the depth-first approach, an iteration focuses on fleshing out only some components or some functionality area. Accordingly, early depth-first iterations might not produce a working product.
Taking a Minesweeper game as an example,
breadth-first iterations will deliver a fully playable version early. These early versions may have primitive functionality, for example, a rudimentary text based UI, fixed board size, limited minefield layouts, etc. These functionalities (and corresponding components) will then be improved in later releases.
an early depth-first iteration could deliver the full user interface (UI) but with no game logic at all. Alternatively, an early iteration could focus on just the logic for generating initial layouts of the minefield. Neither will be a playable version of the game but both can be used to collect early feedback (about the UI, and the initial minefield layouts, respectively) which can then be used to guide later iterations.
A project can be done as a mixture of breadth-first and depth-first iterations, i.e., an iteration can contain some breadth-first work as well as some depth-first work, or some iterations can be breadth-first while others are depth-first.
Agile models
Can explain agile process models
In 2001, a group of prominent software engineering practitioners met and brainstormed for an alternative to documentation-driven, heavyweight software development processes that were used in most large projects at the time. This resulted in something called the agile manifesto (a vision statement of what they were looking to do).
You are uncovering better ways of developing software by doing it and helping others do it.
Through this work you have come to value:
Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan
That is, while there is value in the items on the right, you value the items on the left more. -- Extract from the Agile Manifesto
Subsequently, some of the signatories of the manifesto went on to create process models that try to follow it. These processes are collectively called agile processes. Some of the key features of agile approaches are:
Requirements are prioritized based on the needs of the user, are clarified regularly (at times almost on a daily basis) with the entire project team, and are factored into the development schedule as appropriate.
Instead of doing a very elaborate and detailed design and a project plan for the whole project, the team works based on a rough project plan and a high level design that evolves as the project goes on.
There is a strong emphasis on complete transparency and responsibility sharing among the team members. The team is responsible together for the delivery of the product. Team members are accountable, and regularly and openly share progress with each other and with the user.
There are a number of agile processes in the development world today. eXtreme Programming (XP) and Scrum are two of the well-known ones.
Exercises:
Statements about agile processes
Example process models
XP
Can explain XP
The following description was adapted from the XP home page, emphasis added:
Extreme Programming (XP) stresses customer satisfaction. Instead of delivering everything you could possibly want on some date far in the future, this process delivers the software you need as you need it.
XP aims to empower developers to confidently respond to changing customer requirements, even late in the life cycle.
XP emphasizes teamwork. Managers, customers, and developers are all equal partners in a collaborative team. XP implements a simple, yet effective environment enabling teams to become highly productive. The team self-organizes around the problem to solve it as efficiently as possible.
XP aims to improve a software project in five essential ways: communication, simplicity, feedback, respect, and courage. Extreme Programmers constantly communicate with their customers and fellow programmers. They keep their design simple and clean. They get feedback by testing their software starting on day one. Every small success deepens their respect for the unique contributions of each and every team member. With this foundation, Extreme Programmers are able to courageously respond to changing requirements and technology.
XP has a set of simple rules. XP is a lot like a jigsaw puzzle with many small pieces. Individually the pieces make no sense, but when combined together a complete picture can be seen. This flow chart shows how Extreme Programming's rules work together.
Pair programming, CRC cards, project velocity, and standup meetings are some interesting topics related to XP. Refer to extremeprogramming.org to find out more about XP.
Scrum
Can explain scrum
This description of Scrum was adapted from Wikipedia [retrieved on 18/10/2011], emphasis added:
Scrum is a process skeleton that contains sets of practices and predefined roles. The main roles in Scrum are:
The Scrum Master, who maintains the processes (typically in lieu of a project manager)
The Product Owner, who represents the stakeholders and the business
The Team, a cross-functional group who do the actual analysis, design, implementation, testing, etc.
A Scrum project is divided into iterations called Sprints. A sprint is the basic unit of development in Scrum. Sprints tend to last between one week and one month, and are a timeboxed (i.e. restricted to a specific duration) effort of a constant length.
Each sprint is preceded by a planning meeting, where the tasks for the sprint are identified and an estimated commitment for the sprint goal is made, and followed by a review or retrospective meeting, where the progress is reviewed and lessons for the next sprint are identified.
During each sprint, the team creates a potentially deliverable product increment (for example, working and tested software). The set of features that go into a sprint come from the product backlog, which is a prioritized set of high level requirements of work to be done. Which backlog items go into the sprint is determined during the sprint planning meeting. During this meeting, the Product Owner informs the team of the items in the product backlog that he or she wants completed. The team then determines how much of this they can commit to complete during the next sprint, and records this in the sprint backlog. During a sprint, no one is allowed to change the sprint backlog, which means that the requirements are frozen for that sprint. Development is timeboxed such that the sprint must end on time; if requirements are not completed for any reason they are left out and returned to the product backlog. After a sprint is completed, the team demonstrates the use of the software.
Scrum enables the creation of self-organizing teams by encouraging co-location of all team members, and verbal communication between all team members and disciplines in the project.
A key principle of Scrum is its recognition that during a project the customers can change their minds about what they want and need (often called requirements churn), and that unpredicted challenges cannot be easily addressed in a traditional predictive or planned manner. As such, Scrum adopts an empirical approach—accepting that the problem cannot be fully understood or defined, focusing instead on maximizing the team’s ability to deliver quickly and respond to emerging requirements.
In Scrum, on each day of a sprint, the team holds a daily meeting called the "daily scrum". Meetings are typically held in the same location and at the same time each day. Ideally, a daily scrum meeting is held in the morning, as it helps set the context for the coming day's work. These scrum meetings are strictly time-boxed to 15 minutes. This keeps the discussion brisk but relevant.
...
During the daily scrum, each team member answers the following three questions:
What did you do yesterday?
What will you do today?
Are there any impediments in your way?
...
The daily scrum meeting is not used as a problem-solving or issue resolution meeting. Issues that are raised are taken offline and usually dealt with by the relevant subgroup immediately after the meeting.
Intro to Scrum in Under 10 Minutes
Unified process
Can explain the Unified Process
The unified process was developed by the Three Amigos: Ivar Jacobson, Grady Booch, and James Rumbaugh (the creators of UML).
The unified process consists of four phases: inception, elaboration, construction and transition. The main purpose of each phase can be summarized as follows:
Phase: Inception
Activities: Understand the problem and requirements; communicate with the customer; plan the development effort
Typical artifacts: Basic use case model; rough project plan; project vision and scope

Phase: Elaboration
Activities: Refine and expand the requirements; determine a high-level design, e.g. the system architecture
Typical artifacts: System architecture; various design models; prototype

Phase: Construction
Activities: Major implementation effort to support the use cases identified; design models are refined and fleshed out; testing at all levels is carried out; multiple releases of the system
Typical artifacts: Test cases of all levels; system release

Phase: Transition
Activities: Ready the system for actual production use; familiarize end users with the system
Typical artifacts: Final system release; instruction manual
Given above is a visualization of a project done using the Unified process (source: Wikipedia). As the diagram shows, a phase can consist of several iterations. Each vertical column (labeled “I1”, “E1”, “E2”, “C1”, etc.) represents a single iteration. Each of the iterations consists of a set of ‘workflows’ such as ‘Business modeling’, ‘Requirements’, ‘Analysis & Design’, etc. The shaded region indicates the amount of resources and effort spent on a particular workflow in a particular iteration.
The unified process is a flexible and customizable process model framework rather than a single fixed process. For example, the number of iterations in each phase, the definition of workflows, and the intensity of a given workflow in a given iteration can be adjusted according to the nature of the project. Take the Construction phase: to develop a simple system, one or two iterations would be sufficient, while a more complicated system would benefit from multiple iterations. Therefore, the diagram above simply records a particular application of the UP rather than prescribing how the UP is to be applied. However, this record can be refined and reused for similar future projects.
Exercises:
Statements about the unified process
More
CMMI
Can explain CMMI
CMMI (Capability Maturity Model Integration) is a process improvement approach defined by the Software Engineering Institute at Carnegie Mellon University. CMMI provides organizations with the essential elements of effective processes, which will improve their performance. -- adapted from http://www.sei.cmu.edu/cmmi/
CMMI defines five maturity levels for a process and provides criteria to determine if the process of an organization is at a certain maturity level. The diagram below [taken from Wikipedia] gives an overview of the five levels.
Summary
Recap
Can explain process models at a higher level
This section has some exercises that cover multiple topics related to SDLC process models.
Exercises:
Sequential vs iterative approach
Agile processes, Pair programming, Test-driven development
The two basic process models
Statements about sequential and iterative process models
Software development goes through different stages such as requirements, analysis, design, implementation and testing. These stages are collectively known as the software development life cycle (SDLC). There are several approaches, known as software development life cycle models (also called software process models), that describe different ways to go through the SDLC. Each process model prescribes a 'roadmap' for the software developers to manage the development effort. The roadmap describes the aims of the development stages, the outcome of each stage, and the workflow i.e. the relationship between stages.
Sequential models
Can explain sequential process models
The sequential model, also called the waterfall model, views software development as a linear process, in which the project is seen as progressing through the development stages. The name waterfall stems from how the model is drawn to look like a waterfall (see below).
When one stage of the process is completed, it produces some artifacts to be used in the next stage. For example, the requirements stage produces a comprehensive list of requirements, to be used in the design phase.
A strict sequential model project moves only in the forward direction i.e., each stage is completed before starting the next. For example, once the requirements stage is over, there is no provision for revising the requirements later.
This model can work well for a project that produces software to solve a well-understood problem, in which case the requirements can remain stable and the effort can be estimated accurately. Furthermore, as each stage has a well-defined outcome, it is easy to track the progress of the project because one can gauge the project progress by monitoring which stage the project is in.
However, real-world projects often tackle problems that are not well-understood at the beginning, making them unsuitable for this model. For example, target users of a software product may not be able to state their requirements accurately at the start of the project, if they have not used a similar product before.
Iterative models
Can explain iterative process models
The iterative model advocates producing the software by going through several iterations. Each of the iterations could potentially go through all the stages of the SDLC, from requirements gathering to deployment.
Each iteration produces a new version of the product, building upon the version produced in the previous iteration. Feedback from each iteration is factored into the subsequent iterations. For example, if an implementation task took longer than expected, the effort estimate for a similar tasks in future iterations can be adjusted accordingly. Similarly, if a feature introduced in the current iteration was not well-received by target users, it can be removed or tweaked in the next iteration.
The iterative model can be done in breadth-first or depth-first approach.
In the breadth-first approach, an iteration evolves all major components and all functionality areas in paralleli.e., most features and most will be updated in each iteration, producing a working product at the end of each iteration.
In the depth-first approach, an iteration focuses on fleshing out only some components or some functionality area. Accordingly, early depth-first iterations might not produce a working product.
Taking a Minesweeper game as an example,
breadth-first iterations will deliver a fully playable version early. These early versions may have primitive functionality, for example, a rudimentary text based UI, fixed board size, limited minefield layouts, etc. These functionalities (and corresponding components) will then be improved in later releases.
an early depth-first iteration could deliver the full user interface (UI) but with no game logic at all. Alternatively, an early iteration could focus on just the logic for generating initial layouts of the minefield. Neither will be a playable version of the game but both can be used to collect early feedback (about the UI, and the initial minefield layouts, respectively) which can then be used to guide later iterations.
A project can be done as a mixture of breadth-first and depth-first iterationsi.e., an iteration can contain some breadth-first work as well as some depth-first work, or, some iterations can be breadth-first while others are depth-first.
Agile models
Can explain agile process models
In 2001, a group of prominent software engineering practitioners met and brainstormed for an alternative to documentation-driven, heavyweight software development processes that were used in most large projects at the time. This resulted in something called the agile manifesto (a vision statement of what they were looking to do).
You are uncovering better ways of developing software by doing it and helping others do it.
Through this work you have come to value:
Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan
That is, while there is value in the items on the right, you value the items on the left more. -- Extract from the Agile Manifesto
Subsequently, some of the signatories of the manifesto went on to create process models that try to follow it. These processes are collectively called agile processes. Some of the key features of agile approaches are:
Requirements are prioritized based on the needs of the user, are clarified regularly (at times almost on a daily basis) with the entire project team, and are factored into the development schedule as appropriate.
Instead of doing a very elaborate and detailed design and a project plan for the whole project, the team works based on a rough project plan and a high level design that evolves as the project goes on.
There is a strong emphasis on complete transparency and responsibility sharing among the team members. The team is responsible together for the delivery of the product. Team members are accountable, and regularly and openly share progress with each other and with the user.
There are a number of agile processes in the development world today. eXtreme Programming (XP) and Scrum are two of the well-known ones.
Exercises:
Statements about agile processes
Example process models
XP
Can explain XP
The following description was adapted from the XP home page, emphasis added:
Extreme Programming (XP) stresses customer satisfaction. Instead of delivering everything you could possibly want on some date far in the future, this process delivers the software you need as you need it.
XP aims to empower developers to confidently respond to changing customer requirements, even late in the life cycle.
XP emphasizes teamwork. Managers, customers, and developers are all equal partners in a collaborative team. XP implements a simple, yet effective environment enabling teams to become highly productive. The team self-organizes around the problem to solve it as efficiently as possible.
XP aims to improve a software project in five essential ways: communication, simplicity, feedback, respect, and courage. Extreme Programmers constantly communicate with their customers and fellow programmers. They keep their design simple and clean. They get feedback by testing their software starting on day one. Every small success deepens their respect for the unique contributions of each and every team member. With this foundation, Extreme Programmers are able to courageously respond to changing requirements and technology.
XP has a set of simple rules. XP is a lot like a jigsaw puzzle with many small pieces. Individually the pieces make no sense, but when combined, a complete picture can be seen. This flow chart shows how Extreme Programming's rules work together.
Pair programming, CRC cards, project velocity, and standup meetings are some interesting topics related to XP. Refer to extremeprogramming.org to find out more about XP.
Scrum
Can explain scrum
This description of Scrum was adapted from Wikipedia [retrieved on 18/10/2011], emphasis added:
Scrum is a process skeleton that contains sets of practices and predefined roles. The main roles in Scrum are:
The Scrum Master, who maintains the processes (typically in lieu of a project manager)
The Product Owner, who represents the stakeholders and the business
The Team, a cross-functional group who do the actual analysis, design, implementation, testing, etc.
A Scrum project is divided into iterations called Sprints. A sprint is the basic unit of development in Scrum. Sprints tend to last between one week and one month, and are a timeboxed (i.e. restricted to a specific duration) effort of a constant length.
Each sprint is preceded by a planning meeting, where the tasks for the sprint are identified and an estimated commitment for the sprint goal is made, and followed by a review or retrospective meeting, where the progress is reviewed and lessons for the next sprint are identified.
During each sprint, the team creates a potentially deliverable product increment (for example, working and tested software). The set of features that go into a sprint come from the product backlog, which is a prioritized set of high level requirements of work to be done. Which backlog items go into the sprint is determined during the sprint planning meeting. During this meeting, the Product Owner informs the team of the items in the product backlog that he or she wants completed. The team then determines how much of this they can commit to complete during the next sprint, and records this in the sprint backlog. During a sprint, no one is allowed to change the sprint backlog, which means that the requirements are frozen for that sprint. Development is timeboxed such that the sprint must end on time; if requirements are not completed for any reason they are left out and returned to the product backlog. After a sprint is completed, the team demonstrates the use of the software.
Scrum enables the creation of self-organizing teams by encouraging co-location of all team members, and verbal communication between all team members and disciplines in the project.
A key principle of Scrum is its recognition that during a project the customers can change their minds about what they want and need (often called requirements churn), and that unpredicted challenges cannot be easily addressed in a traditional predictive or planned manner. As such, Scrum adopts an empirical approach—accepting that the problem cannot be fully understood or defined, focusing instead on maximizing the team’s ability to deliver quickly and respond to emerging requirements.
In Scrum, on each day of a sprint, the team holds a short meeting called the "daily scrum". Meetings are typically held in the same location and at the same time each day. Ideally, a daily scrum meeting is held in the morning, as it helps set the context for the coming day's work. These scrum meetings are strictly time-boxed to 15 minutes. This keeps the discussion brisk but relevant.
...
During the daily scrum, each team member answers the following three questions:
What did you do yesterday?
What will you do today?
Are there any impediments in your way?
...
The daily scrum meeting is not used as a problem-solving or issue resolution meeting. Issues that are raised are taken offline and usually dealt with by the relevant subgroup immediately after the meeting.
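The backlog mechanics described above can be sketched in code. The following is a minimal illustration only (the Item and Sprint classes and the backlog contents are made up, not taken from any Scrum tool): sprint planning moves the top-priority product backlog items into a sprint backlog that is then frozen, and unfinished items return to the product backlog when the sprint ends.

```python
# Minimal sketch of the product backlog / sprint backlog flow described above.
# All names (Item, Sprint, the backlog contents) are made up for illustration.
from dataclasses import dataclass, field

@dataclass
class Item:
    description: str
    priority: int          # lower number = higher priority
    done: bool = False

@dataclass
class Sprint:
    sprint_backlog: list = field(default_factory=list)

    def plan(self, product_backlog, capacity):
        """Sprint planning: the team commits to the top-priority items it can handle."""
        product_backlog.sort(key=lambda item: item.priority)
        while product_backlog and len(self.sprint_backlog) < capacity:
            self.sprint_backlog.append(product_backlog.pop(0))
        # From this point on, the sprint backlog is frozen for the sprint.

    def close(self, product_backlog):
        """Sprint review: unfinished items go back to the product backlog."""
        for item in self.sprint_backlog:
            if not item.done:
                product_backlog.append(item)
        self.sprint_backlog = []

product_backlog = [
    Item("Reveal a square", priority=1),
    Item("Flag a mine", priority=2),
    Item("Timer display", priority=3),
]
sprint = Sprint()
sprint.plan(product_backlog, capacity=2)   # commits the two highest-priority items
sprint.sprint_backlog[0].done = True       # only one item gets finished this sprint
sprint.close(product_backlog)              # the unfinished item returns to the product backlog
```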
Intro to Scrum in Under 10 Minutes
Unified process
Can explain the Unified Process
The unified process was developed by the Three Amigos: Ivar Jacobson, Grady Booch and James Rumbaugh (the creators of UML).
The unified process consists of four phases: inception, elaboration, construction and transition. The main purpose of each phase can be summarized as follows:
Phase: Inception
Activities: Understand the problem and requirements; Communicate with customer; Plan the development effort
Typical Artifacts: Basic use case model; Rough project plan; Project vision and scope

Phase: Elaboration
Activities: Refine and expand requirements; Determine a high-level design e.g. system architecture
Typical Artifacts: System architecture; Various design models; Prototype

Phase: Construction
Activities: Major implementation effort to support the use cases identified; Design models are refined and fleshed out; Testing at all levels is carried out; Multiple releases of the system
Typical Artifacts: Test cases of all levels; System release

Phase: Transition
Activities: Ready the system for actual production use; Familiarize end users with the system
Typical Artifacts: Final system release; Instruction manual
Given above is a visualization of a project done using the Unified process (source: Wikipedia). As the diagram shows, a phase can consist of several iterations. Each vertical column (labeled "I1", "E1", "E2", "C1", etc.) represents a single iteration. Each of the iterations consists of a set of 'workflows' such as 'Business modeling', 'Requirements', 'Analysis & Design', etc. The shaded region indicates the amount of resources and effort spent on a particular workflow in a particular iteration.
The unified process is a flexible and customizable process model framework rather than a single fixed process. For example, the number of iterations in each phase, the definition of workflows, and the intensity of a given workflow in a given iteration can be adjusted according to the nature of the project. Take the construction phase: to develop a simple system, one or two iterations would be sufficient; for a more complicated system, multiple iterations will be more helpful. Therefore, the diagram above simply records a particular application of the UP rather than prescribes how the UP is to be applied. However, this record can be refined and reused for similar future projects.
Exercises:
Statements about the unified process
More
CMMI
Can explain CMMI
CMMI (Capability Maturity Model Integration) is a process improvement approach defined by the Software Engineering Institute at Carnegie Mellon University. CMMI provides organizations with the essential elements of effective processes, which will improve their performance. -- adapted from http://www.sei.cmu.edu/cmmi/
CMMI defines five maturity levels for a process and provides criteria to determine if the process of an organization is at a certain maturity level. The diagram below [taken from Wikipedia] gives an overview of the five levels.
Summary
Recap
Can explain process models at a higher level
This section has some exercises that cover multiple topics related to SDLC process models.
Exercises:
Sequential vs iterative approach
Agile processes, Pair programming, Test-driven development
The two basic process models
Statements about sequential and iterative process models
Buffers
A buffer is time set aside to absorb any unforeseen delays. It is very important to include buffers in a software project schedule because effort/time estimations for software development are notoriously hard. However, do not inflate task estimates to create hidden buffers; have explicit buffers instead. Reason: With explicit buffers, it is easier to detect incorrect effort estimates which can serve as feedback to improve future effort estimates.
Gantt charts
A Gantt chart is a 2-D bar-chart, drawn as time vs tasks (represented by horizontal bars).
A sample Gantt chart:
In a Gantt chart, a solid bar represents the main task, which is generally composed of a number of subtasks, shown as grey bars. The diamond shape indicates an important deadline/deliverable/milestone.
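To make the layout concrete, here is a minimal sketch (with made-up task names and dates) that prints a rough text-only Gantt chart, one row per task, with each bar starting at the task's start day:

```python
# Minimal text-only Gantt chart, for illustration; task data is made up.
tasks = [
    # (task name, start day, duration in days)
    ("Requirements", 0, 3),
    ("Design",       3, 4),
    ("Coding",       5, 6),
    ("Testing",      9, 3),
]

total_days = max(start + duration for _, start, duration in tasks)

# Header row: one 3-character column per day.
print(f"{'Task':<14}" + "".join(f"{d:>3}" for d in range(1, total_days + 1)))
for name, start, duration in tasks:
    # '#' marks the days on which the task is active.
    bar = "   " * start + "  #" * duration + "   " * (total_days - start - duration)
    print(f"{name:<14}{bar}")
```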
Issue trackers
Keeping track of project tasks (who is doing what, which tasks are ongoing, which tasks are done etc.) is an essential part of project management. In small projects, it may be possible to keep track of tasks using simple tools such as online spreadsheets or general-purpose/light-weight task tracking tools such as Trello. Bigger projects need more sophisticated task tracking tools.
Issue trackers (sometimes called bug trackers) are commonly used to track task assignment and progress. Most online project management software such as GitHub, SourceForge, and Bitbucket comes with an integrated issue tracker.
A screenshot from the Jira issue tracker software (Jira is part of the Atlassian suite of project management tools, which also includes Bitbucket):
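Issue trackers can also be driven programmatically. As an illustration only (it assumes the third-party requests package, a personal access token in the GITHUB_TOKEN environment variable, and a made-up repository and assignee), an issue could be filed through GitHub's REST API as follows:

```python
# Hypothetical example of creating an issue via GitHub's REST API
# ("POST /repos/{owner}/{repo}/issues"). The repository, issue details,
# and assignee below are made up for illustration.
import os
import requests

resp = requests.post(
    "https://api.github.com/repos/se-edu/minesweeper/issues",  # hypothetical repo
    headers={
        "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
        "Accept": "application/vnd.github+json",
    },
    json={
        "title": "Board does not resize when the window is resized",
        "body": "Steps to reproduce: ...",
        "labels": ["bug"],
        "assignees": ["alice"],   # hypothetical username
    },
)
resp.raise_for_status()
print("Created issue #", resp.json()["number"])
```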
Milestones
A milestone is the end of a stage which indicates significant progress. You should take into account dependencies and priorities when deciding on the features to be delivered at a certain milestone.
Each intermediate product release is a milestone.
In some projects, it is not practical to have a very detailed plan for the whole project due to the uncertainty and unavailability of required information. In such cases, you can use a high-level plan for the whole project and a detailed plan for the next few milestones.
Milestones for the Minesweeper project, iteration 1:
Day 1: Architecture skeleton completed
Day 3: 'new game' feature implemented
Day 4: 'new game' feature tested
PERT charts
A PERT (Program Evaluation Review Technique) chart uses a graphical technique to show the order/sequence of tasks. It is based on the simple idea of drawing a directed graph in which:
Nodes or vertices capture the effort estimations of tasks, and
Arrows depict the precedence between tasks
An example PERT chart for a simple software project
md = man days
A PERT chart can help determine the following important information:
The order of tasks. In the example above, Final Testing cannot begin until all coding of the individual subsystems has been completed.
Which tasks can be done concurrently. In the example above, the various subsystem designs can start independently once the High level design is completed.
The shortest possible completion time. In the example above, there is a path (indicated by the shaded boxes) from start to end that determines the shortest possible completion time.
The Critical Path. In the example above, the path that determines the shortest possible completion time is also the critical path.
The critical path is the path in which any delay can directly affect the project duration. It is important to ensure tasks on the critical path are completed on time.
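The same information can also be computed directly from the task graph. The sketch below (with made-up tasks and effort estimates) derives the shortest possible completion time and the critical path by finding the longest chain of dependent tasks:

```python
# Minimal critical-path computation on a made-up task graph.
# effort = man days per task; deps = tasks that must finish first.
effort = {"High level design": 2, "Design UI": 3, "Design logic": 5,
          "Code UI": 4, "Code logic": 6, "Final testing": 3}
deps = {"High level design": [],
        "Design UI": ["High level design"],
        "Design logic": ["High level design"],
        "Code UI": ["Design UI"],
        "Code logic": ["Design logic"],
        "Final testing": ["Code UI", "Code logic"]}

finish = {}          # earliest possible finish time of each task
critical_pred = {}   # predecessor of each task on the longest (critical) path

def earliest_finish(task):
    if task not in finish:
        pred_finishes = {d: earliest_finish(d) for d in deps[task]}
        start = max(pred_finishes.values(), default=0)
        critical_pred[task] = (max(pred_finishes, key=pred_finishes.get)
                               if pred_finishes else None)
        finish[task] = start + effort[task]
    return finish[task]

completion = max(earliest_finish(t) for t in effort)
print("Shortest possible completion time:", completion, "man days")

# Walk back along the critical path from the last-finishing task.
task = max(finish, key=finish.get)
path = []
while task is not None:
    path.append(task)
    task = critical_pred[task]
print("Critical path:", " -> ".join(reversed(path)))
```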
Introduction
What
Can explain software quality assurance
Software Quality Assurance (QA) is the process of ensuring that the software being built has the required levels of quality.
While testing is the most common activity used in QA, there are other complementary techniques such as static analysis, code reviews, and formal verification.
Validation versus verification
Can explain validation and verification
Quality Assurance = Validation + Verification
QA involves checking two aspects:
Validation: are you building the right system i.e., are the requirements correct?
Verification: are you building the system right i.e., are the requirements implemented correctly?
Whether something belongs under validation or verification is not that important. What is more important is that both are done, instead of limiting QA to verification only (i.e., remember that the requirements can be wrong too).
Exercises:
Statements about validation and verification
Code reviews
What
Can explain code reviews
Code review is the systematic examination of code with the intention of finding where the code can be improved.
Reviews can be done in various forms. Some examples below:
Pull Request reviews
Project management platforms such as GitHub and Bitbucket allow new code to be proposed as pull requests and provide the ability for others to review the code in the PR.
In pair programming
As pair programming involves two programmers working on the same code at the same time, there is an implicit review of the code by the other member of the pair.
Formal inspections
Inspections involve a group of people systematically examining project artifacts to discover defects. Members of the inspection team play various roles during the process, such as:
the author - the creator of the artifact
the moderator - the planner and executor of the inspection meeting
the secretary - the recorder of the findings of the inspection
the inspector/reviewer - the one who inspects/reviews the artifact
Advantages of code review over testing:
It can detect functionality defects as well as other problems such as coding standard violations.
It can verify non-code artifacts and incomplete code.
It does not require test drivers or stubs.
Disadvantages:
It is a manual process and therefore error-prone.
Static analysis
Static analysis is the analysis of code without actually executing the code.
Static analysis of code can find useful information such as unused variables, unhandled exceptions, style errors, and statistics. Most modern IDEs come with some inbuilt static analysis capabilities. For example, an IDE can highlight unused variables as you type the code into the editor.
The term static in static analysis refers to the fact that the code is analyzed without executing the code. In contrast, dynamic analysis requires the code to be executed to gather additional information about the code e.g., performance characteristics.
Higher-end static analysis tools (static analyzers) can perform more complex analysis such as locating potential bugs, memory leaks, inefficient code structures, etc.
Linters are a subset of static analyzers that specifically aim to locate areas where the code can be made 'cleaner'.
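As a small illustration of analyzing code without running it, the sketch below uses only Python's standard-library ast module (the analyzed snippet is made up) to report local variables that are assigned but never read, one of the simpler checks a linter performs:

```python
# A toy static analysis, for illustration only: it parses Python source with
# the standard-library 'ast' module (the analyzed code is never executed) and
# reports names that are assigned but never read.
import ast

SOURCE = '''
def total_price(items):
    tax = 0.07          # assigned but never used -> should be flagged
    subtotal = sum(items)
    return subtotal
'''

tree = ast.parse(SOURCE)
for func in [n for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)]:
    assigned, used = set(), set()
    for node in ast.walk(func):
        if isinstance(node, ast.Name):
            if isinstance(node.ctx, ast.Store):
                assigned.add(node.id)
            elif isinstance(node.ctx, ast.Load):
                used.add(node.id)
    for name in sorted(assigned - used):
        print(f"{func.name}: variable '{name}' is assigned but never used")
```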
Formal verification
What
Can explain formal verification
Formal verification uses mathematical techniques to prove the correctness of a program.
An introduction to Formal Methods
Advantages:
Formal verification can be used to prove the absence of errors. In contrast, testing can only prove the presence of errors, not their absence.
Disadvantages:
It only proves compliance with the specification, but not the actual utility of the software.
It requires highly specialized notations and knowledge, which makes it an expensive technique to administer. Therefore, formal verification is more commonly used in safety-critical software such as flight control systems.
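To make the idea concrete, here is a minimal sketch assuming the third-party z3-solver package (Python bindings for the Z3 SMT solver); it asks the solver to prove, for all integers, that an absolute-value expression is non-negative, rather than merely checking it for a handful of test values:

```python
# Minimal sketch of tool-assisted verification, assuming the third-party
# 'z3-solver' package is installed (pip install z3-solver).
from z3 import Int, If, Or, prove

x = Int('x')
abs_x = If(x >= 0, x, -x)   # an absolute-value expression over a symbolic integer

# Z3 proves these claims for ALL integers x (or reports a counterexample),
# unlike testing, which can only check the specific values supplied.
prove(abs_x >= 0)
prove(Or(abs_x == x, abs_x == -x))
```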
Software Quality Assurance (QA) is the process of ensuring that the software being built has the required levels of quality.
While testing is the most common activity used in QA, there are other complementary techniques such as static analysis, code reviews, and formal verification.
Validation versus verification
Can explain validation and verification
Quality Assurance = Validation + Verification
QA involves checking two aspects:
Validation: are you building the right system i.e., are the requirements correct?
Verification: are you building the system right i.e., are the requirements implemented correctly?
Whether something belongs under validation or verification is not that important. What is more important is that both are done, instead of limiting to only verification (i.e., remember that the requirements can be wrong too).
Exercises:
Statements about validation and verification
Code reviews
What
Can explain code reviews
Code review is the systematic examination of code with the intention of finding where the code can be improved.
Reviews can be done in various forms. Some examples below:
Pull Request reviews
Project Management Platforms such as GitHub and BitBucket allow the new code to be proposed as Pull Requests and provide the ability for others to review the code in the PR.
In pair programming
As pair programming involves two programmers working on the same code at the same time, there is an implicit review of the code by the other member of the pair.
Formal inspections
Inspections involve a group of people systematically examining project artifacts to discover defects. Members of the inspection team play various roles during the process, such as:
the author - the creator of the artifact
the moderator - the planner and executor of the inspection meeting
the secretary - the recorder of the findings of the inspection
the inspector/reviewer - the one who inspects/reviews the artifact
Advantages of code review over testing:
It can detect functionality defects as well as other problems such as coding standard violations.
It can verify non-code artifacts and incomplete code.
It does not require test drivers or stubs.
Disadvantages:
It is a manual process and therefore, error prone.
Static analysis: Static analysis is the analysis of code without actually executing the code.
Static analysis of code can find useful information such as unused variables, unhandled exceptions, style errors, and statistics. Most modern IDEs come with some inbuilt static analysis capabilities. For example, an IDE can highlight unused variables as you type the code into the editor.
The term static in static analysis refers to the fact that the code is analyzed without executing the code. In contrast, dynamic analysis requires the code to be executed to gather additional information about the code e.g., performance characteristics.
Higher-end static analysis tools (static analyzers) can perform more complex analysis such as locating potential bugs, memory leaks, inefficient code structures, etc.
Linters are a subset of static analyzers that specifically aim to locate areas where the code can be made 'cleaner'.
Formal verification
What
Can explain formal verification
Formal verification uses mathematical techniques to prove the correctness of a program.
An introduction to Formal Methods
Advantages:
Formal verification can be used to prove the absence of errors. In contrast, testing can only prove the presence of errors, not their absence.
Disadvantages:
It only proves the compliance with the specification, but not the actual utility of the software.
It requires highly specialized notations and knowledge which makes it an expensive technique to administer. Therefore, formal verifications are more commonly used in safety-critical software such as flight control systems.
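As a toy illustration of the idea (a made-up example, not part of the course materials), here is a property of a tiny function proved in the Lean theorem prover; unlike a test suite, the proof establishes the property for every possible input:

    -- Hypothetical example: 'double' meets its specification for ALL natural numbers.
    def double (n : Nat) : Nat := n + n

    theorem double_spec (n : Nat) : double n = 2 * n := by
      unfold double
      omega  -- linear-arithmetic decision procedure closes the goal n + n = 2 * n

Note that the proof only shows the code matches this particular specification; it says nothing about whether the specification itself is useful.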
Introduction
A software requirement specifies a need to be fulfilled by the software product.
A software project may be,
a brownfield project i.e., develop a product to replace/update an existing software product
a greenfield project i.e., develop a totally new system from scratch
In either case, requirements need to be gathered, analyzed, specified, and managed.
Requirements come from stakeholders.
Stakeholder: An individual or an organization that is involved or potentially affected by the software project. e.g. users, sponsors, developers, interest groups, government agencies, etc.
Identifying requirements is often not easy. For example, stakeholders may not be aware of their precise needs, may not know how to communicate their requirements correctly, may not be willing to spend effort in identifying requirements, etc.
Exercises:
Stakeholders of an LMS
Non-functional requirements
Can explain non-functional requirements
Requirements can be divided into two categories:
Functional requirements specify what the system should do.
Non-functional requirements specify the constraints under which the system is developed and operated.
Some examples of non-functional requirement categories:
Data requirements e.g. size, etc.
Environment requirements e.g. technical environment in which the system would operate in or needs to be compatible with.
Accessibility, Capacity, Compliance with regulations, Documentation, Disaster recovery, Efficiency, Extensibility, Fault tolerance, Interoperability, Maintainability, Privacy, Portability, Quality, Reliability, Response time, Robustness, Scalability, Security, Stability, Testability, and more ...
Some concrete examples of NFRs
Business/domain rules: e.g. the size of the minefield cannot be smaller than five.
Constraints: e.g. the system should be backward compatible with data produced by earlier versions of the system; system testers are available only during the last month of the project; the total project cost should not exceed $1.5 million.
Technical requirements: e.g. the system should work on both 32-bit and 64-bit environments.
Performance requirements: e.g. the system should respond within two seconds.
Quality requirements: e.g. the system should be usable by a novice who has never carried out an online purchase.
Process requirements: e.g. the project is expected to adhere to a schedule that delivers a feature set every one month.
Notes about project scope: e.g. the product is not required to handle the printing of reports.
Any other noteworthy points: e.g. the game should not use images deemed offensive to those injured in real mine clearing activities.
You may have to spend extra effort in digging NFRs out as early as possible because,
NFRs are easier to miss e.g., stakeholders tend to think of functional requirements first
sometimes NFRs are critical to the success of the software e.g., a web application that is too slow or that has low security is unlikely to succeed even if it has all the right functionality.
Exercises:
TEAMMATES NFRs
Prioritizing requirements
Can explain prioritizing requirements
Requirements can be prioritized based on their importance and urgency, while keeping in mind constraints such as schedule, budget, staff resources, and quality goals.
A common approach is to group requirements into priority categories. Note that all such scales are subjective, and stakeholders define the meaning of each level in the scale for the project at hand.
An example scheme for categorizing requirements:
Essential: The product must have this requirement fulfilled or else it does not get user acceptance.
Typical: Most similar systems have this feature although the product can survive without it.
Novel: New features that could differentiate this product from the rest.
Other schemes:
High, Medium, Low
Must-have, Nice-to-have, Unlikely-to-have
Level 0, Level 1, Level 2, ...
Some requirements can be discarded if they are considered ‘out of scope’.
The requirement given below is for a Calendar application. Stakeholders of the software (e.g. product designers) might decide the following requirement is not in the scope of the software.
The software records the actual time taken by each task and shows the difference between the actual and scheduled time for the task.
Quality of requirements
Can explain quality of requirements
Here are some characteristics of well-defined requirements [📖 zielczynski]:
Unambiguous
Testable (verifiable)
Clear (concise, terse, simple, precise)
Correct
Understandable
Feasible (realistic, possible)
Independent
Necessary
Implementation-free (i.e. abstract)
Besides these criteria for individual requirements, the set of requirements as a whole should be
Cloud computing
What
Can explain cloud computing
Cloud computing is the delivery of computing as a service over the network, rather than a product running on a local machine. This means the actual hardware and software is located at a remote location, typically, at a large server farm, while users access them over the network. Maintenance of the hardware and software is managed by the cloud provider while users typically pay for only the amount of services they use. This model is similar to the consumption of electricity; the power company manages the power plant, while the consumers pay them only for the electricity used. The cloud computing model optimizes hardware and software utilization and reduces the cost to consumers. Furthermore, users can scale up/down their utilization at will without having to upgrade their hardware and software. The traditional non-cloud model of computing is similar to everyone buying their own generators to create electricity for their own use.
Cloud computing can deliver computing services at three levels:
Infrastructure as a service (IaaS) delivers computer infrastructure as a service. For example, a user can deploy virtual servers on the cloud instead of buying physical hardware and installing server software on them. Another example would be a customer using storage space on the cloud for off-site storage of data. Rackspace is an example of an IaaS cloud provider. Amazon Elastic Compute Cloud (Amazon EC2) is another one.
Platform as a service (PaaS) provides a platform on which developers can build applications. Developers do not have to worry about infrastructure issues such as deploying servers or load balancing as is required when using IaaS. Those aspects are automatically taken care of by the platform. The price to pay is reduced flexibility; applications written on PaaS are limited to facilities provided by the platform. A PaaS example is the Google App Engine where developers can build applications using Java, Python, PHP, or Go whereas Amazon EC2 allows users to deploy applications written in any language on their virtual servers.
Software as a service (SaaS) allows applications to be accessed over the network instead of installing them on a local machine. For example, Google Docs is a SaaS word processing software, while Microsoft Word is a traditional word processing software.
Exercises:
Google Calendar is in which category?
Frameworks versus libraries
Can differentiate between frameworks and libraries
Although both frameworks and libraries are reuse mechanisms, there are notable differences:
Libraries are meant to be used ‘as is’ while frameworks are meant to be customized/extended. E.g., writing plugins for Eclipse so that it can be used as an IDE for different languages (C++, PHP, etc.), adding modules and themes to Drupal, and adding test cases to JUnit.
Your code calls the library code while the framework code calls your code. Frameworks use a technique called inversion of control, aka the “Hollywood principle” (i.e. don’t call us, we’ll call you!). That is, you write code that will be called by the framework, e.g. writing test methods that will be called by the JUnit framework. In the case of libraries, your code calls libraries.
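The contrast can be seen in a small, hypothetical Java sketch (the class and method names below are made up for illustration): the helper method calls a JDK library class directly, while the @Test method is discovered and invoked by the JUnit framework rather than by our own code.

    // Hypothetical example contrasting library use with framework use.
    import java.util.ArrayList;
    import java.util.List;

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    class ReuseExample {

        // Library: our code calls the JDK's ArrayList API.
        List<String> inUpperCase(List<String> names) {
            List<String> result = new ArrayList<>();
            for (String name : names) {
                result.add(name.toUpperCase());
            }
            return result;
        }

        // Framework: JUnit calls this method for us (inversion of control).
        @Test
        void namesAreUpperCased() {
            assertEquals(List.of("AMY", "BEN"), inUpperCase(List.of("amy", "ben")));
        }
    }

Note that the test method is never called from our own main program; the framework finds it and runs it.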
Exercises:
Statement about software frameworks
Which are frameworks?
Reuse is a major theme in software engineering practices. By reusing tried-and-tested components, the robustness of a new software system can be enhanced while reducing the manpower and time requirement. Reusable components come in many forms; reuse can mean reusing a piece of code, a subsystem, or a whole software product.
When
Can explain the costs and benefits of reuse
While you may be tempted to use many libraries/frameworks/platforms that seem to crop up on a regular basis and promise to bring great benefits, note that there are costs associated with reuse. Here are some:
The reused code may be overkill (think using a sledgehammer to crack a nut), increasing the size of, and/or degrading the performance of, your software.
The reused software may not be mature/stable enough to be used in an important product. That means the software can change drastically and rapidly, possibly in ways that break your software.
Non-mature software has the risk of dying off as fast as it emerged, leaving you with a dependency that is no longer maintained.
The license of the reused software (or its dependencies) may restrict how you can use/develop your software.
The reused software might have bugs, missing features, or security vulnerabilities that are important to your product, but not so important to the maintainers of that software, which means those flaws will not get fixed as fast as you need them to.
Malicious code can sneak into your product via compromised dependencies.
Exercises:
Using a cool UI framework
APIs
What
Can explain APIs
An Application Programming Interface (API) specifies the interface through which other programs can interact with a software component. It is a contract between the component and its clients.
The GitHub API is a collection of web request formats that the GitHub server accepts and their corresponding responses. You can write a program that interacts with GitHub through that API.
When developing large systems, if you define the API of each component early, the development team can develop the components in parallel because the future behavior of the other components are now more predictable.
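For instance, one way (a sketch, assuming the publicly documented GET /repos/{owner}/{repo} endpoint) to interact with GitHub through its web API from Java is to send an HTTP request in the format the GitHub server accepts and read the JSON response:

    // Hypothetical example: fetching repository details via the GitHub web API.
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class GitHubApiDemo {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder(
                    URI.create("https://api.github.com/repos/octocat/Hello-World"))
                    .header("Accept", "application/vnd.github+json")
                    .GET()
                    .build();
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.statusCode()); // e.g., 200
            System.out.println(response.body());       // JSON describing the repository
        }
    }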
Exercises:
Statements about APIs
True or False?
Designing APIs
Can design reasonable quality APIs
An API should be well-designed (i.e. should cater for the needs of its users) and well-documented.
When you write software consisting of multiple components, you need to define the API of each component.
One approach is to let the API emerge and evolve over time as you write code.
Another approach is to define the API up-front. Doing so allows us to develop the components in parallel.
You can use UML sequence diagrams to analyze the required interactions between components in order to discover the required API. Given below is an example.
Example:
As you analyze the interactions between components using sequence diagrams, you discover the API of those components. For example, the diagram above tells us that the MSLogic component API should have the methods:
new()
getWidth():int
getHeight():int
getRemainingMineCount():int
More details can be included to increase the precision of the method definitions before coding. Such precision is important to avoid misunderstandings between the developer of the class and developers of other classes that interact with the class.
Operation: newGame(): void
Description: Generates a new WxH minefield with M mines. Any existing minefield will be overwritten.
Preconditions: None
Postconditions: A new minefield is created. Game state is READY.
Preconditions are the conditions that must be true before calling this operation. Postconditions describe the system after the operation is complete. Note that postconditions do not say what happens during the operation. Here is another example:
Operation: clearCellAt(int x, int y): void
Description: Records the cell at x, y as cleared.
Parameters: x, y coordinates of the cell
Preconditions: game state is READY or IN_PLAY. x and y are in 0..(W-1) and 0..(H-1), respectively.
Postconditions: Cell at x, y changes state to ZERO, ONE, TWO, THREE, …, EIGHT, or INCORRECTLY_CLEARED. Game state changes to IN_PLAY, WON or LOST as appropriate.
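The same information can be carried into the code itself, e.g., as documentation comments on the component's class. The sketch below is one possible way to do that (the state names and method signatures follow the specs above; the method bodies are placeholders only):

// A skeleton of the MSLogic API implied by the operation specs above.
public class MSLogic {
    public enum GameState { READY, IN_PLAY, WON, LOST }

    private GameState state = GameState.READY;

    /**
     * Generates a new WxH minefield with M mines, overwriting any existing minefield.
     * Preconditions: none. Postconditions: a new minefield exists; game state is READY.
     */
    public void newGame() {
        // ... generate the minefield ...
        state = GameState.READY;
    }

    /**
     * Records the cell at (x, y) as cleared.
     * Preconditions: game state is READY or IN_PLAY; x is in 0..(W-1), y is in 0..(H-1).
     * Postconditions: the cell's state is updated; game state becomes IN_PLAY, WON, or LOST as appropriate.
     */
    public void clearCellAt(int x, int y) {
        // ... update the cell and the game state ...
    }

    public int getWidth() { return 0; /* placeholder */ }
    public int getHeight() { return 0; /* placeholder */ }
    public int getRemainingMineCount() { return 0; /* placeholder */ }
}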
Libraries
What
Can explain libraries
A library is a collection of modular code that is general and can be used by other programs.
Java classes you get with the JDK (such as String, ArrayList, HashMap, etc.) are library classes that are provided in the default Java distribution.
Natty is a Java library that can be used for parsing strings that represent dates e.g. The 31st of April in the year 2008
Built-in modules you get with Python (such as csv, random, sys, etc.) are libraries that are provided in the default Python distribution. Classes such as list, str, dict are built-in library classes that you get with Python.
Colorama is a Python library that can be used for colorizing text in a CLI.
How
Can make use of a library
These are the typical steps required to use a library:
Read the documentation to confirm that its functionality fits your needs.
Check the license to confirm that it allows reuse in the way you plan to reuse it. For example, some libraries might allow non-commercial use only.
Download the library and make it accessible to your project. Alternatively, you can configure your dependency management tool to do it for you.
Call the library API from your code where you need to use the library's functionality.
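For example, the last step above is just an ordinary call into the library's API once the library is available to your project. The sketch below uses java.time, a library bundled with the JDK, so no separate download step is needed:

import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

public class LibraryDemo {
    public static void main(String[] args) {
        // Call the library API: parse a date string, then print it in another format.
        LocalDate date = LocalDate.parse("2008-04-30");
        System.out.println(date.format(DateTimeFormatter.ofPattern("dd MMM yyyy")));
    }
}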
Frameworks
What
Can explain frameworks
The overall structure and execution flow of a specific category of software systems can be very similar. The similarity is an opportunity to reuse at a high scale.
Running example:
IDEs for different programming languages are similar in how they support editing code, organizing project files, debugging, etc.
A software framework is a reusable implementation of a software (or part thereof) providing generic functionality that can be selectively customized to produce a specific application.
Running example:
Eclipse is an IDE framework that can be used to create IDEs for different programming languages.
Some frameworks provide a complete implementation of a default behavior which makes them immediately usable.
Running example:
Eclipse is a fully functional Java IDE out-of-the-box.
A framework facilitates the adaptation and customization of some desired functionality.
Running example:
The Eclipse plugin system can be used to create an IDE for different programming languages while reusing most of the existing IDE features of Eclipse.
Some frameworks cover only a specific component or an aspect.
JavaFX is a framework for creating Java GUIs. Tkinter is a GUI framework for Python.
More examples of frameworks
Frameworks for web-based applications: Drupal (PHP), Django (Python), Ruby on Rails (Ruby), Spring (Java)
Frameworks for testing: JUnit (Java), unittest (Python), Jest (JavaScript)
Frameworks versus libraries
Can differentiate between frameworks and libraries
Although both frameworks and libraries are reuse mechanisms, there are notable differences:
Libraries are meant to be used ‘as is’ while frameworks are meant to be customized/extended, e.g., writing plugins for Eclipse so that it can be used as an IDE for different languages (C++, PHP, etc.), adding modules and themes to Drupal, and adding test cases to JUnit.
Your code calls the library code while the framework code calls your code. Frameworks use a technique called inversion of control, aka the “Hollywood principle” (i.e. don’t call us, we’ll call you!). That is, you write code that will be called by the framework, e.g. writing test methods that will be called by the JUnit framework. In the case of libraries, your code calls libraries.
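To make the second difference concrete, here is a minimal JUnit 5 sketch (assuming the JUnit library is available to the project): you write the test method, but you never call it yourself; the JUnit framework discovers it via the @Test annotation and invokes it, which is the inversion of control described above.

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class StringTrimTest {
    @Test
    void trim_removesSurroundingSpaces() {
        // This method is called by the JUnit framework, not by your own code.
        assertEquals("abc", "  abc  ".trim());
    }
}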
Exercises:
Statement about software frameworks
Which are frameworks?
Platforms
What
Can explain platforms
A platform provides a runtime environment for applications. A platform is often bundled with various libraries, tools, frameworks, and technologies in addition to a runtime environment but the defining characteristic of a software platform is the presence of a runtime environment.
Technically, an operating system can be called a platform. For example, Windows PC is a platform for desktop applications while iOS is a platform for mobile applications.
Two well-known examples of platforms are JavaEE and .NET, both of which sit above the operating systems layer, and are used to develop enterprise applications. Infrastructure services such as connection pooling, load balancing, remote code execution, transaction management, authentication, security, messaging etc. are done similarly in most enterprise applications. Both JavaEE and .NET provide these services to applications in a customizable way without developers having to implement them from scratch every time.
JavaEE (Java Enterprise Edition) is both a framework and a platform for writing enterprise applications. The runtime used by JavaEE applications is the JVM (Java Virtual Machine) that can run on different Operating Systems.
.NET is a similar platform and framework. Its runtime is called CLR (Common Language Runtime) and it is usually used on Windows machines.
Cloud computing
What
Can explain cloud computing
Cloud computing is the delivery of computing as a service over the network, rather than a product running on a local machine. This means the actual hardware and software is located at a remote location, typically, at a large server farm, while users access them over the network. Maintenance of the hardware and software is managed by the cloud provider while users typically pay for only the amount of services they use. This model is similar to the consumption of electricity; the power company manages the power plant, while the consumers pay them only for the electricity used. The cloud computing model optimizes hardware and software utilization and reduces the cost to consumers. Furthermore, users can scale up/down their utilization at will without having to upgrade their hardware and software. The traditional non-cloud model of computing is similar to everyone buying their own generators to create electricity for their own use.
Cloud computing can deliver computing services at three levels:
Infrastructure as a service (IaaS) delivers computer infrastructure as a service. For example, a user can deploy virtual servers on the cloud instead of buying physical hardware and installing server software on them. Another example would be a customer using storage space on the cloud for off-site storage of data. Rackspace is an example of an IaaS cloud provider. Amazon Elastic Compute Cloud (Amazon EC2) is another one.
Platform as a service (PaaS) provides a platform on which developers can build applications. Developers do not have to worry about infrastructure issues such as deploying servers or load balancing as is required when using IaaS. Those aspects are automatically taken care of by the platform. The price to pay is reduced flexibility; applications written on PaaS are limited to facilities provided by the platform. A PaaS example is the Google App Engine where developers can build applications using Java, Python, PHP, or Go whereas Amazon EC2 allows users to deploy applications written in any language on their virtual servers.
Software as a service (SaaS) allows applications to be accessed over the network instead of installing them on a local machine. For example, Google Docs is a SaaS word processing software, while Microsoft Word is a traditional word processing software.
Branching
Branching is the process of evolving multiple versions of the software in parallel. For example, one team member can create a new branch and add an experimental feature to it while the rest of the team keeps working on another branch. Branches can be given names e.g. master, release, dev.
A branch can be merged into another branch. Merging usually results in a new commit that represents the changes done in the branch being merged.
Branching and merging
Merge conflicts happen when you try to merge two branches that had changed the same part of the code and the RCS cannot decide which changes to keep. In those cases, you have to ‘resolve’ the conflicts manually.
DRCS vs CRCS
RCS can be done in two ways: the centralized way and the distributed way.
Centralized RCS (CRCS for short) uses a central remote repo that is shared by the team. Team members download (‘pull’) and upload (‘push’) changes between their own local repositories and the central repository. Older RCS tools such as CVS and SVN support only this model. Note that these older RCS do not support the notion of a local repo either. Instead, they force users to do all the versioning with the remote repo.
The centralized RCS approach without any local repos (e.g., CVS, SVN)
Distributed RCS (DRCS for short, also known as Decentralized RCS) allows multiple remote repos, and pulling and pushing can be done among them in arbitrary ways. The workflow can vary from team to team. For example, every team member can have their own remote repository in addition to their own local repository, as shown in the diagram below. Git and Mercurial are some prominent RCS tools that support the distributed approach.
The decentralized RCS approach
+
DRCS vs CRCS
RCS can be done in two ways: the centralized way and the distributed way.
Centralized RCS (CRCS for short) uses a central remote repo that is shared by the team. Team members download (‘pull’) and upload (‘push’) changes between their own local repositories and the central repository. Older RCS tools such as CVS and SVN support only this model. Note that these older RCS do not support the notion of a local repo either. Instead, they force users to do all the versioning with the remote repo.
The centralized RCS approach without any local repos (e.g., CVS, SVN)
Distributed RCS (DRCS for short, also known as Decentralized RCS) allows multiple remote repos and pulling and pushing can be done among them in arbitrary ways. The workflow can vary differently from team to team. For example, every team member can have his/her own remote repository in addition to their own local repository, as shown in the diagram below. Git and Mercurial are some prominent RCS tools that support the distributed approach.
Feature branch workflow is similar to forking workflow except there are no forks. Everyone is pushing/pulling from the same remote repo. The phrase feature branch is used because each new feature (or bug fix, or any other modification) is done in a separate branch and merged to the master branch when ready. Pull requests can still be created within the central repository, from the feature branch to the main branch.
As this workflow requires all team members to have write access to the repository,
it is better to protect the main branch using some mechanism, to reduce the risk of accidental undesirable changes to it.
it is not suitable for situations where the code contributors are not 'trusted' enough to be given write permission.
Remote repositories
Remote repositories are repos that are hosted on remote computers and allow remote access. They are especially useful for sharing the revision history of a codebase among team members of a multi-person project. They can also serve as a remote backup of your codebase.
It is possible to set up your own remote repo on a server, but the easier option is to use a remote repo hosting service such as GitHub or BitBucket.
You can clone a repo to create a copy of that repo in another location on your computer. The copy will even have the revision history of the original repo i.e., identical to the original repo. For example, you can clone a remote repo onto your computer to create a local copy of the remote repo.
When you clone from a repo, the original repo is commonly referred to as the upstream repo. A repo can have multiple upstream repos. For example, let's say a repo repo1 was cloned as repo2 which was then cloned as repo3. In this case, repo1 and repo2 are upstream repos of repo3.
You can pull from one repo to another, to receive new commits in the second repo, but only if the repos have a shared history. Let's say some new commits were added to the upstream repo after you cloned it and you would like to copy over those new commits to your own clone, i.e., sync your clone with the upstream repo. In that case, you pull from the upstream repo to your clone.
You can push new commits in one repo to another repo which will copy the new commits onto the destination repo. Note that pushing to a repo requires you to have write-access to it. Furthermore, you can push between repos only if those repos have a shared history among them (i.e., one was created by copying the other at some point in the past).
Cloning, pushing, and pulling can be done between two local repos too, although it is more common for them to involve a remote repo.
A repo can work with any number of other repositories as long as they have a shared history e.g., repo1 can pull from (or push to) repo2 and repo3 if they have a shared history between them.
A fork is a remote copy of a remote repo. As you know, cloning creates a local copy of a repo. In contrast, forking creates a remote copy of a Git repo hosted on GitHub. This is particularly useful if you want to play around with a GitHub repo but you don't have write permissions to it; you can simply fork the repo and do whatever you want with the fork as you are the owner of the fork.
A pull request (PR for short) is a mechanism for contributing code to a remote repo, i.e., "I'm requesting you to pull my proposed changes to your repo". For this to work, the two repos must have a shared history. The most common case is sending PRs from a fork to its upstream repo.
Repositories
The repository is the database that stores the revision history. Suppose you want to apply revision control on files in a directory called ProjectFoo. In that case, you need to set up a repo (short for repository) in the ProjectFoo directory, which is referred to as the working directory of the repo. For example, Git uses a hidden folder named .git inside the working directory, to store the database of the working directory's revision history.
Repository (repo for short): The database of the history of a directory being tracked by an RCS software (e.g. Git).
Working directory: the root directory revision-controlled by Git (e.g., the directory in which the repo was initialized).
You can have multiple repos on your computer, each repo revision-controlling files of a different working directory, for example, files of different projects.
Using history
Can explain basic concepts of how RCS history is used
RCS tools store the history of the working directory as a series of commits. This means you should commit after each change that you want the RCS to 'remember'.
Each commit in a repo is a recorded point in the history of the project that is uniquely identified by an auto-generated hash e.g. a16043703f28e5b3dab95915f5c5e5bf4fdc5fc1.
You can tag a specific commit with a more easily identifiable name e.g. v1.0.2.
To see what changed between two points of the history, you can ask the RCS tool to diff the two commits in concern.
To restore the state of the working directory at a point in the past, you can checkout the commit in concern. i.e., you can traverse the history of the working directory simply by checking out the commits you are interested in.
What
Revision control is the process of managing multiple versions of a piece of information. In its simplest form, this is something that many people do by hand: every time you modify a file, save it under a new name that contains a number, each one higher than the number of the preceding version.
Manually managing multiple versions of even a single file is an error-prone task, though, so software tools to help automate this process have long been available. The earliest automated revision control tools were intended to help a single user to manage revisions of a single file. Over the past few decades, the scope of revision control tools has expanded greatly; they now manage multiple files, and help multiple people to work together. The best modern revision control tools have no problem coping with thousands of people working together on projects that consist of hundreds of thousands of files.
Revision control software will track the history and evolution of your project, so you don't have to. For every change, you'll have a log of who made it; why they made it; when they made it; and what the change was.
Revision control software makes it easier for you to collaborate when you're working with other people. For example, when people more or less simultaneously make potentially incompatible changes, the software will help you to identify and resolve those conflicts.
It can help you to recover from mistakes. If you make a change that later turns out to be an error, you can revert to an earlier version of one or more files. In fact, a really good revision control tool will even help you to efficiently figure out exactly when a problem was introduced.
It will help you to work simultaneously on, and manage the drift between, multiple versions of your project. Most of these reasons are equally valid, at least in theory, whether you're working on a project by yourself, or with a hundred other people.
-- [adapted from bryan-mercurial-guide]
Revision: A revision (some seem to use it interchangeably with version while others seem to distinguish the two -- here, let us treat them as the same, for simplicity) is a state of a piece of information at a specific time that is a result of some changes to it e.g., if you modify the code and save the file, you have a new revision (or a new version) of that file.
RCS: Revision control software refers to the software tools that automate the process of revision control, i.e., managing revisions of software artifacts.
Revision control software is also known as Version Control Software (VCS), and by a few other names.
Git is the most widely used RCS today. Other RCS tools include Mercurial, Subversion (SVN), Perforce, CVS (Concurrent Versions System), Bazaar, TFS (Team Foundation Server), and Clearcase.
GitHub is a web-based project hosting platform for projects using Git for revision control. Other similar services include GitLab, BitBucket, and SourceForge.
Prose
What
Can explain prose
A textual description (i.e. prose) can be used to describe requirements. Prose is especially useful when describing abstract ideas such as the vision of a product.
The product vision of the TEAMMATES Project given below is described using prose.
TEAMMATES aims to become the biggest student project in the world (biggest here refers to 'many contributors, many users, large code base, evolving over a long period'). Furthermore, it aims to serve as a training tool for Software Engineering students who want to learn SE skills in the context of a non-trivial real software product.
Avoid using lengthy prose to describe requirements; such descriptions can be hard to follow.
Supplementary requirements
What
Can explain supplementary requirements
A supplementary requirements section can be used to capture requirements that do not fit elsewhere. Typically, this is where most Non-Functional Requirements will be listed.
Requirements → Specifying Requirements → Use Cases →
Usage
You can use actor generalization in use case diagrams using a symbol similar to that of UML notation for inheritance.
In this example, actor Blogger can do all the use cases the actor Guest can do, as a result of the actor generalization relationship given in the diagram.
Do not over-complicate use case diagrams by trying to include everything possible. A use case diagram is a brief summary of the use cases that is used as a starting point. Details of the use cases can be given in the use case descriptions.
Some include ‘System’ as an actor to indicate that something is done by the system itself without being initiated by a user or an external system.
The diagram below can be used to indicate that the system generates daily reports at midnight.
However, others argue that only use cases providing value to an external user/system should be shown in the use case diagram. For example, they argue that view daily report should be the use case and generate daily report is not to be shown in the use case diagram because it is simply something the system has to do to support the view daily report use case.
You are recommended to follow the latter view (i.e. not to use System as a user). Limit use cases to modeling behaviors that involve an external actor.
UML is not very specific about the text contents of a use case. Hence, there are many styles for writing use cases. For example, the steps can be written as a continuous paragraph.
Use cases should be easy to read. Note that there is no strict rule about writing all details of all steps or a need to use all the elements of a use case.
There are some advantages of documenting system requirements as use cases:
Because they use a simple notation and plain English descriptions, they are easy for users to understand and give feedback.
They decouple user intention from mechanism (note that use cases should not include UI-specific details), allowing the system designers more freedom to optimize how a functionality is provided to a user.
Identifying all possible extensions encourages us to consider all situations that a software product might face during its operation.
Separating typical scenarios from special cases encourages us to optimize the typical scenarios.
One of the main disadvantages of use cases is that they are not good for capturing requirements that do not involve a user interacting with the system. Hence, they should not be used as the sole means to specify requirements.
Exercises:
Advantages of use cases
Statements about use cases
Requirements → Specifying Requirements → User Stories →
Introduction
User story: User stories are short, simple descriptions of a feature told from the perspective of the person who desires the new capability, usually a user or customer of the system. [Mike Cohn]
A common format for writing user stories is:
User story format: As a {user type/role} I can {function} so that {benefit}
Examples (from a Learning Management System):
As a student, I can download files uploaded by lecturers, so that I can get my own copy of the files
As a lecturer, I can create discussion forums, so that students can discuss things online
As a tutor, I can print attendance sheets, so that I can take attendance during the class
You can write user stories using a physical medium or a digital tool. For example, you can use index cards or sticky notes, and arrange them on walls or tables. Alternatively, you can use software (e.g., GitHub Project Boards, Trello, Google Docs, ...) to manage user stories digitally.
Team structures
Given below are three commonly used team structures in software development. Irrespective of the team structure, it is a good practice to assign roles and responsibilities to different team members so that someone is clearly in charge of each aspect of the project. In comparison, the ‘everybody is responsible for everything’ approach can result in more chaos and hence slower progress.
Egoless team
In this structure, every team member is equal in terms of responsibility and accountability. When any decision is required, consensus must be reached. This team structure is also known as a democratic team structure. This team structure usually finds a good solution to a relatively hard problem as all team members contribute ideas.
However, the democratic nature of the team structure bears a higher risk of falling apart due to the absence of an authority figure to manage the team and resolve conflicts.
Chief programmer team
Frederick Brooks proposed that software engineers learn from the medical surgical team in an operating room. In such a team, there is always a chief surgeon, assisted by experts in other areas. Similarly, in a chief programmer team structure, there is a single authoritative figure, the chief programmer. Major decisions, e.g. system architecture, are made solely by him/her and obeyed by all other team members. The chief programmer directs and coordinates the effort of other team members. When necessary, the chief will be assisted by domain specialists e.g. business specialists, database experts, network technology experts, etc. This allows individual group members to concentrate solely on the areas in which they have sound knowledge and expertise.
The success of such a team structure relies heavily on the chief programmer. Not only must he/she be a superb technical hand, he/she also needs good managerial skills. Under a suitably qualified leader, such a team structure is known to produce successful work.
Strict hierarchy team
At the opposite extreme of an egoless team, a strict hierarchy team has a strictly defined organization among the team members, reminiscent of the military or a bureaucratic government. Each team member only works on his/her assigned tasks and reports to a single “boss”.
In a large, resource-intensive, complex project, this could be a good team structure to reduce communication overhead.
Exercises:
Which team structure is the most suitable for a school project?
Quality Assurance → Test Case Design → Boundary Value Analysis →
What
Boundary Value Analysis (BVA) is a test case design heuristic that is based on the observation that bugs often result from incorrect handling of boundaries of equivalence partitions. This is not surprising, as the end points of boundaries are often used in branching instructions, etc., where the programmer can make mistakes.
The markCellAt(int x, int y) operation could contain code such as if (x > 0 && x <= (W-1)) which involves the boundaries of x’s equivalence partitions.
BVA suggests that when picking test inputs from an equivalence partition, values near boundaries (i.e. boundary values) are more likely to find bugs.
Boundary values are sometimes called corner cases.
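For example, if the valid equivalence partition for x is [0, W-1], BVA suggests picking test inputs on and just outside each boundary. A minimal JUnit sketch of such tests is given below; the Board class here is a hypothetical stand-in for the real SUT, included only to keep the sketch self-contained.

import static org.junit.jupiter.api.Assertions.assertDoesNotThrow;
import static org.junit.jupiter.api.Assertions.assertThrows;
import org.junit.jupiter.api.Test;

public class MarkCellAtBoundaryTest {

    // Hypothetical stand-in for the real SUT (assumed board width W = 10).
    static class Board {
        static final int W = 10;
        void markCellAt(int x, int y) {
            if (x < 0 || x > W - 1) {
                throw new IllegalArgumentException("x is out of range");
            }
            // marking logic omitted
        }
    }

    @Test
    public void markCellAt_xValuesAroundTheBoundaries() {
        Board board = new Board();
        // around the lower boundary of the valid partition [0, W-1]
        assertThrows(IllegalArgumentException.class, () -> board.markCellAt(-1, 0)); // just outside
        assertDoesNotThrow(() -> board.markCellAt(0, 0));                             // on the boundary
        // around the upper boundary of the valid partition [0, W-1]
        assertDoesNotThrow(() -> board.markCellAt(Board.W - 1, 0));                   // on the boundary
        assertThrows(IllegalArgumentException.class, () -> board.markCellAt(Board.W, 0)); // just outside
    }
}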
Exercises:
What BVA recommends
Quality Assurance → Test Case Design → Combining Test Inputs →
Why
Can explain the need for strategies to combine test inputs
An SUT can take multiple inputs. You can select values for each input (using equivalence partitioning, boundary value analysis, or some other technique).
An SUT that takes multiple inputs and some values chosen for each input:
Method to test: calculateGrade(participation, projectGrade, isAbsent, examScore)
Values to test:
Input         | Valid values to test | Invalid values to test
participation | 0, 1, 19, 20         | 21, 22
projectGrade  | A, B, C, D, F        |
isAbsent      | true, false          |
examScore     | 0, 1, 69, 70         | 71, 72
Testing all possible combinations is effective but not efficient. If you test all possible combinations for the above example, you need to test 6x5x2x6=360 cases. Doing so has a higher chance of discovering bugs (i.e. effective) but the number of test cases will be too high (i.e. not efficient). Therefore, you need smarter ways to combine test inputs that are both effective and efficient.
Test input combination strategies
Can explain some basic test input combination strategies
Given below are some basic strategies for generating a set of test cases by combining multiple test inputs.
Let's assume the SUT has the following three inputs and you have selected the given values for testing:
SUT: foo(char p1, int p2, boolean p3)
Values to test:
Input | Values
p1    | a, b, c
p2    | 1, 2, 3
p3    | T, F
The all combinations strategy generates test cases for each unique combination of test inputs.
This strategy generates 3x3x2=18 test cases.
Test Case | p1  | p2  | p3
1         | a   | 1   | T
2         | a   | 1   | F
3         | a   | 2   | T
...       | ... | ... | ...
18        | c   | 3   | F
The at least once strategy includes each test input at least once.
This strategy generates 3 test cases.
Test Case | p1 | p2 | p3
1         | a  | 1  | T
2         | b  | 2  | F
3         | c  | 3  | VV/IV
VV/IV = Any Valid Value / Any Invalid Value
The all pairs strategy creates test cases so that for any given pair of inputs, all combinations between them are tested. It is based on the observation that a bug is rarely the result of more than two interacting factors. The resulting number of test cases is lower than the all combinations strategy, but higher than the at least once approach.
This strategy generates 9 test cases:
Test Case | p1 | p2 | p3
1         | a  | 1  | T
2         | a  | 2  | T
3         | a  | 3  | F
4         | b  | 1  | F
5         | b  | 2  | T
6         | b  | 3  | F
7         | c  | 1  | T
8         | c  | 2  | F
9         | c  | 3  | T
A variation of this strategy is to test all pairs of inputs but only for inputs that could influence each other.
Testing all pairs between p1 and p3 only while ensuring all p2 values are tested at least once:
Test Case | p1 | p2    | p3
1         | a  | 1     | T
2         | a  | 2     | F
3         | b  | 3     | T
4         | b  | VV/IV | F
5         | c  | VV/IV | T
6         | c  | VV/IV | F
The random strategy generates test cases using one of the other strategies and then picks a subset randomly (presumably because the original set of test cases is too big).
There are other strategies that can be used too.
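As an illustration (a sketch only, not part of the original example), the all combinations set for foo can be enumerated mechanically; the smaller sets used by the other strategies are then chosen from, or built instead of, this full set.

import java.util.ArrayList;
import java.util.List;

public class AllCombinationsDemo {
    public static void main(String[] args) {
        char[] p1Values = {'a', 'b', 'c'};
        int[] p2Values = {1, 2, 3};
        boolean[] p3Values = {true, false};

        // 'All combinations' strategy: one test case per unique combination of the chosen inputs.
        List<String> testCases = new ArrayList<>();
        for (char p1 : p1Values) {
            for (int p2 : p2Values) {
                for (boolean p3 : p3Values) {
                    testCases.add(p1 + ", " + p2 + ", " + (p3 ? "T" : "F"));
                }
            }
        }
        // Prints 18, i.e. 3 x 3 x 2 test cases.
        System.out.println(testCases.size() + " test cases generated: " + testCases);
    }
}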
Heuristic: Each valid input at least once in a positive test case
Can apply heuristic ‘each valid input at least once in a positive test case’
Consider the following scenario.
SUT: printLabel(String fruitName, int unitPrice)
Selected values for fruitName (Dog is an invalid value):
Values | Explanation
Apple  | Label format is round
Banana | Label format is oval
Cherry | Label format is square
Dog    | Not a valid fruit
Selected values for unitPrice (0 and -1 are invalid values):
Values | Explanation
1      | Only one digit
20     | Two digits
0      | Invalid because 0 is not a valid price
-1     | Invalid because negative prices are not allowed
Suppose these are the test cases being considered.
Case | fruitName | unitPrice | Expected
1    | Apple     | 1         | Print label
2    | Banana    | 20        | Print label
3    | Cherry    | 0         | Error message “invalid price”
4    | Dog       | -1        | Error message “invalid fruit”
It looks like the test cases were created using the at least once strategy. After running these tests, can you confirm that the square-format label printing is done correctly?
Answer: No.
Reason: Cherry -- the only input that can produce a square-format label -- is in a negative test case which produces an error message instead of a label. If there is a bug in the code that prints labels in square-format, these test cases will not trigger that bug.
In this case, a useful heuristic to apply is each valid input must appear at least once in a positive test case. Cherry is a valid test input and you must ensure that it appears at least once in a positive test case. Here are the updated test cases after applying that heuristic.
Case | fruitName | unitPrice | Expected
1    | Apple     | 1         | Print round label
2    | Banana    | 20        | Print oval label
2.1  | Cherry    | VV        | Print square label
3    | VV        | 0         | Error message “invalid price”
4    | Dog       | -1        | Error message “invalid fruit”
VV = Any Valid Value, VV/IV = Any Valid or Invalid Value
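A possible way to automate the updated test cases is sketched below. The printLabel stub, its label strings, and the use of 5 as the 'any valid value' price are all assumptions made for illustration; the real SUT may behave differently.

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;
import org.junit.jupiter.api.Test;

public class PrintLabelTest {

    // Hypothetical stand-in for the real SUT, included to keep the sketch self-contained.
    static String printLabel(String fruitName, int unitPrice) {
        String shape;
        switch (fruitName) {
        case "Apple": shape = "round"; break;
        case "Banana": shape = "oval"; break;
        case "Cherry": shape = "square"; break;
        default: throw new IllegalArgumentException("invalid fruit");
        }
        if (unitPrice <= 0) {
            throw new IllegalArgumentException("invalid price");
        }
        return shape + " label";
    }

    @Test
    public void eachValidFruitAppearsInAPositiveTestCase() {
        assertEquals("round label", printLabel("Apple", 1));   // case 1
        assertEquals("oval label", printLabel("Banana", 20));  // case 2
        assertEquals("square label", printLabel("Cherry", 5)); // case 2.1: Cherry now appears in a positive test case
    }

    @Test
    public void invalidInputsProduceErrorMessages() {
        assertEquals("invalid price",
                assertThrows(IllegalArgumentException.class, () -> printLabel("Cherry", 0)).getMessage()); // case 3
        assertEquals("invalid fruit",
                assertThrows(IllegalArgumentException.class, () -> printLabel("Dog", -1)).getMessage());   // case 4
    }
}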
Heuristic: No more than one invalid input in a test case
Can apply heuristic ‘no more than one invalid input in a test case’
Consider the test cases designed in [Heuristic: each valid input at least once in a positive test case].
After running these test cases, can you be sure that the error message “invalid price” is shown for negative prices?
Answer: No.
Reason: -1 -- the only input that is a negative price -- is in a test case that produces the error message “invalid fruit”.
In this case, a useful heuristic to apply is no more than one invalid input in a test case. After applying that, you get the following test cases.
Case | fruitName | unitPrice | Expected
1    | Apple     | 1         | Print round label
2    | Banana    | 20        | Print oval label
2.1  | Cherry    | VV        | Print square label
3    | VV        | 0         | Error message “invalid price”
4    | VV        | -1        | Error message “invalid price”
4.1  | Dog       | VV        | Error message “invalid fruit”
VV = Any Valid Value, VV/IV = Any Valid or Invalid Value
Exercises:
Can define test cases precisely
Mix
Can apply multiple test input combination techniques together
To get the first cut of test cases, let’s apply the at least once strategy.
Test cases for calculateGrade V1
Case No. | participation | projectGrade | isAbsent | examScore | Expected
1        | 0             | A            | true     | 0         | ...
2        | 1             | B            | false    | 1         | ...
3        | 19            | C            | VV/IV    | 69        | ...
4        | 20            | D            | VV/IV    | 70        | ...
5        | 21            | F            | VV/IV    | 71        | Err Msg
6        | 22            | VV/IV        | VV/IV    | 72        | Err Msg
VV/IV = Any Valid or Invalid Value, Err Msg = Error Message
Next, let’s apply the each valid input at least once in a positive test case heuristic. Test case 5 has a valid value for projectGrade=F that doesn't appear in any other positive test case. Let's replace test case 5 with 5.1 and 5.2 to rectify that.
Test cases for calculateGrade V2
Case No. | participation | projectGrade | isAbsent | examScore | Expected
1        | 0             | A            | true     | 0         | ...
2        | 1             | B            | false    | 1         | ...
3        | 19            | C            | VV       | 69        | ...
4        | 20            | D            | VV       | 70        | ...
5.1      | VV            | F            | VV       | VV        | ...
5.2      | 21            | VV/IV        | VV/IV    | 71        | Err Msg
6        | 22            | VV/IV        | VV/IV    | 72        | Err Msg
VV = Any Valid Value, VV/IV = Any Valid or Invalid Value
Next, you have to apply the no more than one invalid input in a test case heuristic. Test cases 5.2 and 6 don't follow that heuristic. Let's rectify the situation as follows:
Test cases for calculateGrade V3
Case No. | participation | projectGrade | isAbsent | examScore | Expected
1        | 0             | A            | true     | 0         | ...
2        | 1             | B            | false    | 1         | ...
3        | 19            | C            | VV       | 69        | ...
4        | 20            | D            | VV       | 70        | ...
5.1      | VV            | F            | VV       | VV        | ...
5.2      | 21            | VV           | VV       | VV        | Err Msg
5.3      | 22            | VV           | VV       | VV        | Err Msg
6.1      | VV            | VV           | VV       | 71        | Err Msg
6.2      | VV            | VV           | VV       | 72        | Err Msg
Next, you can assume that there is a dependency between the inputs examScore and isAbsent such that an absent student can only have examScore=0. To cater for the hidden invalid case arising from this, you can add a new test case where isAbsent=true and examScore!=0. In addition, test cases 3-6.2 should have isAbsent=false so that the input remains valid.
Test cases for calculateGrade V4
Case No. | participation | projectGrade | isAbsent | examScore | Expected
1        | 0             | A            | true     | 0         | ...
2        | 1             | B            | false    | 1         | ...
3        | 19            | C            | false    | 69        | ...
4        | 20            | D            | false    | 70        | ...
5.1      | VV            | F            | false    | VV        | ...
5.2      | 21            | VV           | false    | VV        | Err Msg
5.3      | 22            | VV           | false    | VV        | Err Msg
6.1      | VV            | VV           | false    | 71        | Err Msg
6.2      | VV            | VV           | false    | 72        | Err Msg
7        | VV            | VV           | true     | !=0       | Err Msg
Exercises:
Statements about test input combinations
Combine test inputs for the consume method
Quality Assurance → Test Case Design → Introduction →
Black box versus glass box
Can explain black box and glass box test case design
Test case design can be of three types, based on how much of the SUT's internal details are considered when designing test cases:
Black-box (aka specification-based or responsibility-based) approach: test cases are designed exclusively based on the SUT’s specified external behavior.
White-box (aka glass-box or structured or implementation-based) approach: test cases are designed based on what is known about the SUT’s implementation, i.e. the code.
Gray-box approach: test case design uses some important information about the implementation. For example, if the implementation of a sort operation uses different algorithms to sort lists shorter than 1000 items and lists longer than 1000 items, more meaningful test cases can then be added to verify the correctness of both algorithms.
Quality Assurance → Test Case Design → Introduction →
Positive versus negative test cases
A positive test case is when the test is designed to produce an expected/valid behavior. On the other hand, a negative test case is designed to produce a behavior that indicates an invalid/unexpected situation, such as an error message.
Consider the testing of the method print(Integer i) which prints the value of i.
A positive test case: i == new Integer(50);
A negative test case: i == null;
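A small sketch of how these two kinds of test cases could look as automated tests; the print method below is a hypothetical stand-in for the real SUT and is assumed to reject null with an exception.

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;
import org.junit.jupiter.api.Test;

public class PrintTest {

    // Hypothetical stand-in for the real SUT, included to keep the sketch self-contained.
    static String print(Integer i) {
        if (i == null) {
            throw new IllegalArgumentException("input must not be null");
        }
        return i.toString();
    }

    @Test
    public void print_validInteger_printsItsValue() { // positive test case: expected/valid behavior
        assertEquals("50", print(Integer.valueOf(50)));
    }

    @Test
    public void print_null_isRejected() { // negative test case: invalid/unexpected situation
        assertThrows(IllegalArgumentException.class, () -> print(null));
    }
}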
Quality Assurance → Test Case Design →
Testing based on use cases
Can explain test case design for use case based testing
Use cases can be used for system testing and acceptance testing. For example, the main success scenario can be one test case while each variation (due to extensions) can form another test case. However, note that use cases do not specify the exact data entered into the system. Instead, it might say something like user enters his personal data into the system. Therefore, the tester has to choose data by considering equivalence partitions and boundary values. The combinations of these could result in one use case producing many test cases.
To increase the E&E (effectiveness and efficiency) of testing, high-priority use cases are given more attention. For example, a scripted approach can be used to test high-priority test cases, while an exploratory approach is used to test other areas of concern that could emerge during testing.
Dependency injection
What
Dependency injection is the process of 'injecting' objects to replace current dependencies with a different object. This is often used to inject stubs to isolate the SUT from its dependencies so that it can be tested in isolation.
A Foo object normally depends on a Bar object, but you can inject a BarStub object so that the Foo object no longer depends on a Bar object. Now you can test the Foo object in isolation from the Bar object.
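A minimal, self-contained sketch of the idea using the Foo/Bar example; the class shapes below are assumptions made for illustration, not the design of any particular system.

public class DependencyInjectionDemo {

    interface Bar {
        String read();
    }

    // The real dependency, which Foo normally uses (imagine it being slow or hard to set up).
    static class RealBar implements Bar {
        public String read() {
            return "data from the real dependency";
        }
    }

    // A stub with the same interface, injected during testing so that Foo no longer depends on the real Bar.
    static class BarStub implements Bar {
        public String read() {
            return "canned data";
        }
    }

    static class Foo {
        private final Bar bar;

        // The dependency is 'injected' through the constructor rather than created inside Foo.
        Foo(Bar bar) {
            this.bar = bar;
        }

        String process() {
            return "processed: " + bar.read();
        }
    }

    public static void main(String[] args) {
        Foo inProduction = new Foo(new RealBar()); // normal wiring
        Foo underTest = new Foo(new BarStub());    // test wiring: Foo is isolated from the real Bar
        System.out.println(inProduction.process());
        System.out.println(underTest.process());   // prints "processed: canned data"
    }
}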
Introduction
What
Can explain testing
Testing: Operating a system or component under specified conditions, observing or recording the results, and making an evaluation of some aspect of the system or component. -- source: IEEE
When testing, you execute a set of test cases. A test case specifies how to perform a test. At a minimum, it specifies the input to the software under test (SUT) and the expected behavior.
Example: A minimal test case for testing a browser:
Input – Start the browser using a blank page (vertical scrollbar disabled). Then, load longfile.html located in the test data folder.
Expected behavior – The scrollbar should be automatically enabled upon loading longfile.html.
Test cases can be determined based on the specification, reviewing similar existing systems, or comparing to the past behavior of the SUT.
Other details a test case can contain ... extra
For each test case you should do the following:
Feed the input to the SUT
Observe the actual output
Compare actual output with the expected output
A test case failure is a mismatch between the expected behavior and the actual behavior. A failure indicates a potential defect (or a bug), unless the error is in the test case itself.
Example: In the browser example above, a test case failure is implied if the scrollbar remains disabled after loading longfile.html. The defect/bug causing that failure could be an uninitialized variable.
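In code, the same idea can be captured with a unit testing framework such as JUnit. A minimal sketch follows, using a trivial hypothetical SUT (not the browser scenario above) so that the example is self-contained.

import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

public class MinimalTestCaseExample {

    // Hypothetical SUT, included only to keep the example self-contained.
    static int square(int x) {
        return x * x;
    }

    @Test
    public void square_negativeInput_returnsPositiveSquare() {
        int actual = square(-3);  // feed the input to the SUT and observe the actual output
        assertEquals(9, actual);  // compare the actual output with the expected output
    }
}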
A deeper look at the definition of testing extra
Testability
Can explain testability
Testability is an indication of how easy it is to test an SUT. As testability depends a lot on the design and implementation, you should try to increase the testability when you design and implement software. The higher the testability, the easier it is to achieve better quality software.
Test-Driven Development
What
Test-Driven Development (TDD) advocates writing the tests before writing the SUT, while evolving functionality and tests in small increments. In TDD you first define the precise behavior of the SUT using test code, and then update the SUT to match the specified behavior. While TDD has its fair share of detractors, there are many who consider it a good way to reduce defects. One big advantage of TDD is that it guarantees the code is testable.
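A minimal sketch of the TDD rhythm, assuming JUnit 5; the StringUtil class and its reverse method are hypothetical, used only to illustrate writing the test before the implementation.

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

// Step 1: specify the intended behavior as a test, before StringUtil.reverse is implemented.
// Running it at this point fails (it does not even compile until a skeleton method exists).
class StringUtilTest {
    @Test
    void reverse_nonEmptyString_returnsReversedString() {
        assertEquals("cba", StringUtil.reverse("abc"));
    }
}

// Step 2: write just enough production code to make the test pass, then refactor.
class StringUtil {
    static String reverse(String s) {
        return new StringBuilder(s).reverse().toString();
    }
}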
Exercises:
When do we write tests in TDD?
Automated testing of GUIs
If a software product has a GUI (Graphical User Interface) component, all product-level testing (i.e. the types of testing mentioned above) needs to be done using the GUI. However, testing the GUI is much harder than testing the CLI (Command Line Interface) or API, for the following reasons:
Most GUIs can support a large number of different operations, many of which can be performed in any arbitrary order.
GUI operations are more difficult to automate than API testing. Reliably automating GUI operations and automatically verifying whether the GUI behaves as expected is harder than calling an operation and comparing its return value with an expected value. Therefore, automated regression testing of GUIs is rather difficult.
The appearance of a GUI (and sometimes even behavior) can be different across platforms and even environments. For example, a GUI can behave differently based on whether it is minimized or maximized, in focus or out of focus, and in a high resolution display or a low resolution display.
Moving as much logic as possible out of the GUI can make GUI testing easier. That way, you can bypass the GUI to test the rest of the system using automated API testing. While this still requires the GUI to be tested, the number of such test cases can be reduced as most of the system will have been tested using automated API testing.
There are testing tools that can automate GUI testing.
Some tools used for automated GUI testing:
TestFX can do automated testing of JavaFX GUIs
Visual Studio supports the ‘record replay’ type of GUI test automation.
Selenium can be used to automate testing of web application UIs
Demo video of automated testing of a web application
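For instance, a Selenium-based test might drive a browser along the lines of the sketch below; the URL and element locators are placeholders, and chromedriver is assumed to be installed and on the system path.

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class SearchBoxGuiTest {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver(); // assumes chromedriver is on the PATH
        try {
            driver.get("https://example.com/search");        // placeholder URL
            driver.findElement(By.name("q")).sendKeys("tea"); // placeholder element locator
            driver.findElement(By.name("submit")).click();    // placeholder element locator

            // The pass/fail decision is made programmatically, not by a human watching the screen.
            if (!driver.getTitle().contains("results")) {
                throw new AssertionError("Expected a results page, but title was: " + driver.getTitle());
            }
        } finally {
            driver.quit();
        }
    }
}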
Exercises:
API testing vs GUI testing
An automated test case can be run programmatically and the result of the test case (pass or fail) is determined programmatically. Compared to manual testing, automated testing reduces the effort required to run tests repeatedly and increases precision of testing (because manual testing is susceptible to human errors).
How
Measuring coverage is often done using coverage analysis tools. Most IDEs have inbuilt support for measuring test coverage, or at least have plugins that can measure test coverage.
Coverage analysis can be useful in improving the quality of testing, e.g., if a set of test cases does not achieve 100% branch coverage, more test cases can be added to cover missed branches.
Measuring code coverage in IntelliJ IDEA (watch from 4 minutes 50 seconds mark)
Acceptance testing
What
Can explain acceptance testing
Acceptance testing (aka User Acceptance Testing (UAT)): testing the system to ensure it meets the user requirements.
Acceptance tests give an assurance to the customer that the system does what it is intended to do. Acceptance test cases are often defined at the beginning of the project, usually based on the use case specification. Successful completion of UAT is often a prerequisite to the project sign-off.
Acceptance versus system testing
Can explain the differences between system testing and acceptance testing
Acceptance testing comes after system testing. Similar to system testing, acceptance testing involves testing the whole system.
Some differences between system testing and acceptance testing:
System testing: done against the system specification. Acceptance testing: done against the requirements specification.
System testing: done by testers of the project team. Acceptance testing: done by a team that represents the customer.
System testing: done on the development environment or a test bed. Acceptance testing: done on the deployment site or on a close simulation of the deployment site.
System testing: both negative and positive test cases. Acceptance testing: more focus on positive test cases.
Note: negative test cases: cases where the SUT is not expected to work normally e.g. incorrect inputs; positive test cases: cases where the SUT is expected to work normally
Requirement specification versus system specification
The requirement specification need not be the same as the system specification. Some example differences:
Requirements specification: usually limited to how the system behaves in normal working conditions. System specification: can also include details on how it will fail gracefully when pushed beyond limits, how to recover, etc.
Requirements specification: written in terms of problems that need to be solved (e.g. provide a method to locate an email quickly). System specification: written in terms of how the system solves those problems (e.g. explain the email search feature).
Requirements specification: specifies the interface available for intended end-users. System specification: could contain additional APIs not available for end-users (for the use of developers/testers).
However, in many cases one document serves as both a requirement specification and a system specification.
Passing system tests does not necessarily mean passing acceptance testing. Some examples:
The system might work on the testbed environments but might not work the same way in the deployment environment, due to subtle differences between the two environments.
The system might conform to the system specification but could fail to solve the problem it was supposed to solve for the user, due to flaws in the system design.
Exercises:
Statements about system testing and acceptance testing
Alpha and beta testing
What
Can explain alpha and beta testing
Alpha testing is performed by the users, under controlled conditions set by the software development team.
Beta testing is performed by a selected subset of target users of the system in their natural work setting.
An open beta release is the release of not-yet-production-quality-but-almost-there software to the general population. For example, Google’s Gmail was in 'beta' for many years before the label was finally removed.
Developer testing
What
Can explain developer testing
Developer testing is the testing done by the developers themselves as opposed to dedicated testers or end-users.
Why
Can explain the need for early developer testing
Delaying testing until the full product is complete has a number of disadvantages:
Locating the cause of a test case failure is difficult due to the larger search space; in a large system, the search space could be millions of lines of code, written by hundreds of developers! The failure may also be due to multiple inter-related bugs.
Fixing a bug found during such testing could result in major rework, especially if the bug originated from the design or during requirements specification i.e. a faulty design or faulty requirements.
One bug might 'hide' other bugs, which could emerge only after the first bug is fixed.
The delivery may have to be delayed if too many bugs are found during testing.
Therefore, it is better to do early testing, as hinted by the popular rule of thumb given below, also illustrated by the graph below it.
The earlier a bug is found, the easier and cheaper to have it fixed.
Such early testing of software is usually, and often by necessity, done by the developers themselves, i.e., developer testing.
Exercises:
Implications of developers testing their own code
Cost of bug fixing over time
Dogfooding
What
Can explain dogfooding
Dogfooding is when creators use their own product in order to experience how end users experience the product. The term is supposedly derived from the phrase "eating our own dogfood". Dogfooding is different from regular testing in that you become an end user, rather than pretend to be an end user.
For example, suppose a company produces an email client software. Then, getting some of the employees to use that software for their day-to-day emailing would be dogfooding. Such longer-term, consistent, and authentic use of the software can point to areas of improvement that regular testing (which is often short-term and 'simulated') might not encounter.
Note that for dogfooding to be useful, observations need to be deliberately collected and processed, i.e., just using the product itself is not enough.
Exercises:
Can Dogfooding improve product design?
What
Integration testing: testing whether different parts of the software work together (i.e. integrate) as expected. Integration tests aim to discover bugs in the 'glue code' related to how components interact with each other. These bugs are often the result of misunderstanding what the parts are supposed to do vs what the parts are actually doing.
Suppose a class Car uses classes Engine and Wheel. If the Car class assumed a Wheel can support a speed of up to 200 mph but the actual Wheel can only support a speed of up to 150 mph, it is the integration test that is supposed to uncover this discrepancy.
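A small sketch of what such an integration test could look like. The Car, Engine, and Wheel classes below are simplified stand-ins written for this example (with an assumed canCruiseAt operation), and the 150 mph limit mirrors the scenario described above.

import static org.junit.jupiter.api.Assertions.assertFalse;

import org.junit.jupiter.api.Test;

class Engine { }

class Wheel {
    int maxSupportedSpeed() {
        return 150; // the actual Wheel only supports up to 150 mph
    }
}

class Car {
    private final Engine engine;
    private final Wheel wheel;

    Car(Engine engine, Wheel wheel) {
        this.engine = engine;
        this.wheel = wheel;
    }

    boolean canCruiseAt(int mph) {
        return mph <= wheel.maxSupportedSpeed();
    }
}

// Uses the real Wheel (no stubs), so a mismatch between what Car assumes
// and what Wheel actually supports surfaces here rather than in production.
class CarWheelIntegrationTest {
    @Test
    void canCruiseAt_200mph_rejectedByRealWheel() {
        Car car = new Car(new Engine(), new Wheel());
        assertFalse(car.canCruiseAt(200));
    }
}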
Regression testing
What
Can explain regression testing
When you modify a system, the modification may result in some unintended and undesirable effects on the system. Such an effect is called a regression.
Regression testing is the re-testing of the software to detect regressions. The typical way to detect regressions is retesting all related components, even if they had been tested before.
Regression testing is more effective when it is done frequently, after each small change. However, doing so can be prohibitively expensive if testing is done manually. Hence, regression testing is more practical when it is automated.
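In practice, a regression test is often just an ordinary automated test kept in the suite permanently after a bug is fixed, so that the whole suite can be re-run cheaply after every change. The sketch below assumes JUnit 5 and a hypothetical PriceUtil class.

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class PriceUtil {
    // A past version (hypothetically) crashed with a divide-by-zero when quantity was 0.
    static double unitPrice(double total, int quantity) {
        return quantity == 0 ? 0.0 : total / quantity;
    }
}

// Kept in the suite and re-run after every change; if the old bug is ever
// reintroduced, this test fails and flags the regression immediately.
class PriceUtilRegressionTest {
    @Test
    void unitPrice_zeroQuantity_returnsZeroInsteadOfCrashing() {
        assertEquals(0.0, PriceUtil.unitPrice(9.90, 0));
    }
}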
Exercises:
Regression Testing definition: T/F?
Activity diagrams
Introduction
Can explain activity diagrams
UML activity diagrams (AD) can model workflows. Flow charts are another type of diagram that can model workflows. Activity diagrams are the UML equivalent of flow charts.
Basic notations
Linear paths
Can interpret linear paths in activity diagrams
An activity diagram (AD) captures an activity through the actions and control flows that make up the activity.
An action is a single step in an activity. It is shown as a rectangle with rounded corners.
A control flow shows the flow of control from one action to the next. It is shown by drawing a line with an arrow-head to show the direction of the flow.
Note the slight difference between the start node and the end node which represent the start and the end of the activity, respectively.
This activity diagram shows the action sequence of the activity a passenger rides the bus:
Exercises:
Which activity diagrams are correct?
Alternate paths
Can interpret alternate paths in activity diagrams
A branch node shows the start of alternate paths. Each control flow exiting a branch node has a guard condition: a boolean condition that should be true for execution to take that path. Exactly one of the guard conditions should be true at any given branch node.
A merge node shows the end of alternate paths.
Both branch nodes and merge nodes are diamond shapes. Guard conditions must be in square brackets.
The AD below shows alternate paths involved in the workflow of the activity shop for product:
Some acceptable simplifications (by convention):
Omitting the merge node if it doesn't cause any ambiguities.
Multiple arrows can start from the same corner of a branch node.
Omitting the [Else] condition.
The AD below illustrates the simplifications mentioned above:
Exercises:
Which activity diagrams are correct?
Parallel paths
Can interpret parallel paths in activity diagrams
Fork nodes indicate the start of flows of control.
Join nodes indicate the end of parallel paths.
Both have the same notation: a bar.
In a fork-join, execution along all parallel paths should be complete before the execution can start on the outgoing control flow of the join.
In this activity diagram (from an online shop website) the actions User browses products and System records browsing data happen in parallel. Both of them need to finish before the log out action can take place.
Exercises:
Which activity diagrams are correct?
Which sequence of actions are supported?
Rakes
Can use rakes in activity diagrams
The rake notation is used to indicate that a part of the activity is given as a separate diagram.
Here is the AD for a game of ‘Snakes and Ladders’.
The rake symbol (in the Move piece action above) is used to show that the action is described in another subsidiary activity diagram elsewhere. That diagram is given below.
Swimlanes
Can explain swimlanes in activity diagrams
It is possible to partition an activity diagram to show who is doing which action. Such partitioned activity diagrams are sometimes called swimlane diagrams.
A simple example of a swimlane diagram:
Aggregation
Can interpret aggregation in class diagrams
UML uses a hollow diamond to indicate an aggregation.
Notation:
Example:
Aggregation vs Composition
The distinction between composition (◆) and aggregation (◇) is rather blurred. Martin Fowler’s famous book UML Distilled advocates omitting the aggregation symbol altogether because using it adds more confusion than clarity.
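The blurriness is also visible at the code level: both typically end up as a plain reference field, and the difference is mostly about who creates and owns the part. A rough sketch, with hypothetical classes:

import java.util.ArrayList;
import java.util.List;

// Composition-flavored: Email creates and owns its Paragraph parts;
// the parts are not meant to exist independently of the whole.
class Paragraph { }

class Email {
    private final List<Paragraph> paragraphs = new ArrayList<>();

    void addParagraph() {
        paragraphs.add(new Paragraph()); // created internally, owned by Email
    }
}

// Aggregation-flavored: Team merely holds references to Person objects
// that are created elsewhere and can outlive the Team.
class Person { }

class Team {
    private final List<Person> members;

    Team(List<Person> members) {
        this.members = members; // supplied from outside
    }
}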
Exercises:
Which one is not recommended to use?
Multiplicity
Can explain what is the multiplicity of an association
Commonly used multiplicities:
0..1 : optional, can be linked to 0 or 1 objects.
1 : compulsory, must be linked to one object at all times.
* : can be linked to 0 or more objects.
n..m : the number of linked objects must be within n to m inclusive.
In the diagram below, an Admin object administers (is in charge of) any number of students but a Student object must always be under the charge of exactly one Admin object.
In the diagram below,
Each student must be supervised by exactly one professor. i.e. There cannot be a student who doesn't have a supervisor or has multiple supervisors.
A professor cannot supervise more than 5 students but can have no students to supervise.
An admin can handle any number of professors and any number of students, including none.
A professor/student can be handled by any number of admins, including none.
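One way these multiplicities might surface in Java code is sketched below; the field names are illustrative, and the 1 and 0..5 constraints would still need to be enforced by the classes' methods.

import java.util.ArrayList;
import java.util.List;

class Student {
    private Professor supervisor; // multiplicity 1: intended to be non-null at all times
}

class Professor {
    private List<Student> supervisees = new ArrayList<>(); // multiplicity 0..5: code must cap the size at 5
}

class Admin {
    private List<Professor> professors = new ArrayList<>(); // multiplicity *: any number, including none
    private List<Student> students = new ArrayList<>();     // multiplicity *
}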
Exercises:
Which statement agrees with the multiplicity shown in this diagram?
Associations as attributes
Can show an association as an attribute
An association can be shown as an attribute instead of a line.
Association multiplicities and the default value can be shown as part of the attribute using the following notation. Both are optional.
name: type [multiplicity] = default value
The diagram below depicts a multi-player Square Game being played on a board comprising 100 squares. Each of the squares may be occupied by any number of pieces, each belonging to a certain player.
A Piece may or may not be on a Square. Note how that association can be replaced by an isOn attribute of the Piece class. The isOn attribute can either be null or hold a reference to a Square object, matching the 0..1 multiplicity of the association it replaces. The default value is null.
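In Java, that 0..1 association rendered as an attribute might look like the minimal sketch below (the moveTo method is just illustrative):

class Square { }

class Piece {
    private Square isOn = null; // 0..1: null means the piece is not on any square

    void moveTo(Square square) {
        this.isOn = square;
    }
}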
The association that a Board has 100 Squares can be shown in either of these two ways:
Show each association as either an attribute or a line but not both. A line is preferred as it is easier to spot.
Diagram (a) given below shows the 'author' association between the Book class and the Person class as a line while (b) shows the same association as an attribute in the Book class. Both are correct and the two are equivalent. But (c) is not correct as it uses both a line and an attribute to show the same association.
(a) (b) (c)
+
Associations as attributes
An association can be shown as an attribute instead of a line.
Association multiplicities and the default value can be shown as part of the attribute using the following notation. Both are optional.
name: type [multiplicity] = default value
The diagram below depicts a multi-player Square Game being played on a board comprising of 100 squares. Each of the squares may be occupied with any number of pieces, each belonging to a certain player.
A Piece may or may not be on a Square. Note how that association can be replaced by an isOn attribute of the Piece class. The isOn attribute can either be null or hold a reference to a Square object, matching the 0..1 multiplicity of the association it replaces. The default value is null.
The association that a Board has 100 Squares can be shown in either of these two ways:
Show each association as either an attribute or a line but not both. A line is preferred as it is easier to spot.
Diagram (a) given below shows the 'author' association between the Book class and the Person class as a line while (b) shows the same association as an attribute in the Book class. Both are correct and the two are equivalent. But (c) is not correct as it uses both a line and an attribute to show the same association.
Classes
What
Can draw UML classes
The basic UML notations used to represent a class:
A Table class shown in UML notation:
The equivalent code
The 'Operations' compartment and/or the 'Attributes' compartment may be omitted if such details are not important for the task at hand. Similarly, some attributes/operations can be omitted if not relevant to the purpose of the diagram. The 'Attributes' compartment always appears above the 'Operations' compartment. All operations should be in one compartment rather than each operation in a separate compartment. The same goes for attributes.
The visibility of attributes and operations is used to indicate the level of access allowed for each attribute or operation. The types of visibility and their exact meanings depend on the programming language used. Here are some common visibilities and how they are indicated in a class diagram:
+ : public
- : private
# : protected
~ : package private
How visibilities map to programming language features
Table class with visibilities shown:
The equivalent code
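As a rough illustration of how these visibilities map to Java (the attribute and operation names below are assumptions, not the ones in the diagram):

    public class Table {
        private int height;            // '-' maps to private
        protected int seatingCapacity; // '#' maps to protected
        String label;                  // '~' maps to package-private (no keyword in Java)

        public int getSeatingCapacity() { // '+' maps to public
            return seatingCapacity;
        }

        private void setHeight(int height) { // '-' maps to private
            this.height = height;
        }
    }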
Generic classes can be shown as given below. The notation format is shown on the left, followed by two examples.
Exercises:
Which classes are correct?
Draw Car class
Interfaces
Can interpret interfaces in class diagrams
An interface is shown similarly to a class, but with an additional keyword <<interface>>. When a class implements an interface, it is shown similarly to class inheritance, except that a dashed line is used instead of a solid line.
The AcademicStaff and the AdminStaff classes implement the SalariedStaff interface.
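A minimal Java sketch of how this could look in code (the getSalary() operation is an assumed example, not taken from the diagram):

    interface SalariedStaff {
        double getSalary();
    }

    class AcademicStaff implements SalariedStaff {
        @Override
        public double getSalary() {
            return 5000; // placeholder value
        }
    }

    class AdminStaff implements SalariedStaff {
        @Override
        public double getSalary() {
            return 4000; // placeholder value
        }
    }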
Introduction
What
Can explain/identify class diagrams
UML class diagrams describe the structure (but not the behavior) of an OOP solution. These are possibly the most often used diagrams in the industry and are an indispensable tool for an OO programmer.
An example class diagram:
Constraints
Can specify constraints in UML diagrams
A constraint can be given inside a note, within curly braces. Natural language or a formal notation such as OCL (Object Constraint Language) may be used to specify constraints.
Example:
Notes
Can use UML notes
UML notes can augment UML diagrams with additional information. These notes can be shown connected to a particular element in the diagram or can be shown without a connection. The diagram below shows examples of both.
Example:
Can interpret sequence diagrams with alternative paths
Alternative paths
UML uses alt frames to indicate alternative paths.
Notation:
Minefield calls the Cell#setMine method if the cell is supposed to be a mined cell, and calls the Cell#setMineCount(...) method otherwise.
No more than one of the alternative partitions can be executed in an alt frame. That is, it is acceptable for none of the alternative partitions to be executed, but it is not acceptable for multiple partitions to be executed.
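In the implementation, an alt frame typically corresponds to an if-else. A minimal Java sketch of the Minefield example, assuming a simple Cell API along the lines described above:

    class Cell {
        private boolean isMined;
        private int mineCount;

        void setMine() {
            isMined = true;
        }

        void setMineCount(int count) {
            mineCount = count;
        }
    }

    class Minefield {
        // Only one of the two branches runs for a given cell, matching the rule that
        // at most one partition of the alt frame is executed.
        void initCell(Cell cell, boolean shouldBeMined, int neighbouringMines) {
            if (shouldBeMined) {
                cell.setMine();
            } else {
                cell.setMineCount(neighbouringMines);
            }
        }
    }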
Can interpret sequence diagrams with basic notation
Basic
Notation:
This sequence diagram shows some interactions between a human user and the Text UI of a Minesweeper game.
The player runs the newgame action on the TextUi object which results in the TextUi showing the minefield to the player. Then, the player runs the clear x y command; in response, the TextUi object shows the updated minefield.
The :TextUi in the above example denotes an unnamed instance of the class TextUi. If there were two instances of TextUi in the diagram, they can be distinguished by naming them e.g. TextUi1:TextUi and TextUi2:TextUi.
Arrows representing method calls should be solid arrows while those representing method returns should be dashed arrows.
Note that unlike in object diagrams, the class/object name is not underlined in sequence diagrams.
The arrowhead style depends on the type of method call. Arrows showing synchronous method calls (i.e., the caller method is blocked from doing anything else until the called method returns) should use filled arrowheads. Most method calls (e.g., normal Java method calls) are synchronous. Asynchronous method calls (i.e., the caller method does not have to wait till the called method returns) are shown using lined arrowheads. As the latter is out of scope for this textbook, all sequence diagram arrowheads you encounter in this textbook will be of the first type.
[Common notation error] Activation bar too long: The activation bar of a method cannot start before the method call arrives and a method cannot remain active after the method has returned. In the two sequence diagrams below, the one on the left commits this error because the activation bar starts before the method Foo#xyz() is called and remains active after the method returns.
[Common notation error] Broken activation bar: The activation bar should remain unbroken while the method is being executed, from the point the method is called until the method returns. In the two sequence diagrams below, the one on the left commits this error because the activation bar for the method Foo#abc() is broken in the middle.
Can interpret sequence diagrams with minimal notation
Minimal notation
To reduce clutter, optional elements (e.g., activation bars, return arrows) may be omitted if the omission does not result in ambiguities or loss of relevant information. Informal operation descriptions such as those given in the example below can be used, if more precise details are not required for the task at hand.
A minimal sequence diagram
If method parameters don't matter to the purpose of the sequence diagram, you can omit them using ... e.g., use foo(...) instead of foo(int size, double weight).
Can interpret sequence diagrams with parallel paths
Parallel paths
UML uses par frames to indicate parallel paths.
Notation:
Logic is calling methods CloudServer#poll() and LocalData#poll() in parallel.
If you show parallel paths in a sequence diagram, the corresponding Java implementation is likely to be multi-threaded, because a normal Java program cannot do multiple things at the same time.
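As a rough illustration, the parallel calls above could be realized using threads. The sketch below assumes bare-bones CloudServer and LocalData classes, each with a poll() method:

    class CloudServer {
        void poll() {
            // contact the remote server ...
        }
    }

    class LocalData {
        void poll() {
            // read the local store ...
        }
    }

    class Logic {
        void pollBoth(CloudServer server, LocalData local) throws InterruptedException {
            // Each poll() runs on its own thread, so the two calls proceed in parallel,
            // matching the par frame in the sequence diagram.
            Thread cloudThread = new Thread(server::poll);
            Thread localThread = new Thread(local::poll);
            cloudThread.start();
            localThread.start();
            cloudThread.join(); // wait for both calls to finish
            localThread.join();
        }
    }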
Can interpret sequence diagrams with reference frames
Reference frames
UML uses ref frames to allow a segment of the interaction to be omitted and shown as a separate sequence diagram. Reference frames help you to break complicated sequence diagrams into multiple parts, or simply to omit details you are not interested in showing.
Notation:
The details of the get minefield appearance interactions have been omitted from the diagram.
Those details are shown in a separate sequence diagram given below.
Calls to static methods
Method calls to static (i.e., class-level) methods are received by the class itself, not an instance of that class. You can use <<class>> to show that a participant is the class itself.
In this example, m calls the static method Person.getMaxAge() and also the setAge() method of a Person object p.
Here is the Person class, for reference:
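The original code listing is not reproduced here; the following is a minimal sketch of a Person class consistent with the description above (MAX_AGE and the field names are assumptions):

    class Person {
        private static final int MAX_AGE = 150;
        private int age;

        // static (class-level) method: called on the class itself, e.g., Person.getMaxAge()
        public static int getMaxAge() {
            return MAX_AGE;
        }

        // instance method: called on a Person object p, e.g., p.setAge(25)
        public void setAge(int age) {
            this.age = age;
        }
    }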
This week, we cover one more course briefing segment, given below. As with other course briefing segments, it is compulsory to watch.
tP Briefing (Part 1 -- Getting Started)
Video 9 mins
As usual, the weekly briefing will be hybrid mode, and optional to attend/watch.
We strongly recommend that you have a team project meeting before the tutorial. Keep notes of the meeting, and update project documents -- the tutor will ask for those during the tutorial.
Do the following during the meeting:
Finish the tP tasks allocated for the week.
Help each other finish iP tasks. Tasks allocated to this week are especially troublesome and some peer help can be very useful.
On a separate note, our guidelines on dealing with technical problems:
A summary of the week, and announcements relevant to that week, will appear in this Summary tab.
In each week, go through all the tabs for that week (i.e., Topics, Admin Info, ...) given at the top of this page and follow the instructions in them. FYI, a full timeline is available too.
This week, there are things for you to do before the upcoming weekly briefing (refer to the above tabs for details).
Our first weekly briefing will be on Fri, Jan 19th. It will be fully online. Subsequent briefings will be hybrid.
Our tutorials start in week 3.
[CS2103T Students]: Of the many weekly sessions that appear under CS2103T, only two actually belong to CS2103T: the lecture slot on Friday 1600-1800, and the 1-hour tutorial slot. The other two 2-hour slots belong to the CS2101 course. CS2101 tutorials start in week 1.
Weekly quiz: Read weekly topics allocated for this week and submit the weekly quiz before the quiz deadline (i.e., before the following weekly briefing).
Heads up! Attendance is compulsory for the weekly briefing coming up on Fri, Apr 5th , as we use that time slot for the Practical Exam Dry Run (graded).
Submit final peer evaluation on TEAMMATES Thu, Apr 18th 2359
Submit the PE-Readiness Quiz before the PE
1 Submit final peer evaluation on TEAMMATES Thu, Apr 18th 2359
Submission will open within one day after the final submission (i.e., sometime on Tue, Apr 16th). If you did not receive the submission link, you can get TEAMMATES to resend the link by going to the TEAMMATES link recovery page and entering your NUSNET email address. Remember to check your spam folder as well.
Session: Final Peer Evaluation
Held at the end of the tP.
This peer evaluation is compulsory. Not only does it count for weekly participation; those who don't submit will also not get a chance to rebut the peer evaluations they receive.
This session includes all questions from the Midterm Peer Evaluation:
In addition, it contains these additional questions:
Q: Do you agree with the contributions claimed by team members, as stated in their PPP?
Q: Rank team members based on their ability/potential to lead a software project team (rank 1 is strongest)
2 Submit the PE-Readiness Quiz before the PE
Submit this quiz (on Canvas) to confirm that you know important details about the PE.
Please note that tutors will not be available for consultations after this week (doesn't apply to this week). As most tutors are UG students who have their own exams, we do not allow CS2103/T students to take up tutors' time during the reading week and the exam period. This is especially important for CS2103/T tutors as they have to spend a significant time on adjudicating disputed PE bugs.
Any questions related to the course should be posted in the forum (preferred) or sent to the prof.
Topics:
No topics allocated to this week.
Admin:
Submit final peer evaluation on TEAMMATES Thu, Apr 18th 2359
Submit the PE-Readiness Quiz before the PE
tP:v1.4
Do final polish-ups
Submit deliverables Mon, Apr 15th 23:59
Submit the demo video Wed, Apr 17th 2359
Wrap up the milestone Wed, Apr 17th 2359
Prepare for the practical exam
Attend the practical exam during the weekly briefing on Fri, Apr 19th
Read weekly topics allocated for this week. Submit the Weekly Quiz (on Canvas) to test your knowledge of those topics. Weekly quizzes are counted for participation.
FAQ: When can we see the quiz answers?
In most quizzes, answers will be released within a day after the quiz deadline.
On a related note, if you are not confident about the answer you've selected for a question, you are welcome to discuss it in the forum, even if the submission deadline is not over yet (but one question per thread please).
2 Get connected to our communication channels
If you haven't done so already, follow the 'Preparation' instructions of the following panel, to get connected with the communication channels used by the course.
iP
The iP (and the tP) undergoes changes after each semester. As such, teething issues are a possibility. If you encounter any problem while doing the iP/tP, please post in the forum so that we can take necessary actions.
We discourage you from doing project tasks allocated to future weeks. Reason: in order to help you learn incrementally (and also to better simulate real projects), we want the project work to be spread out over a longer period, rather than done as a short burst.
Reminder: as per the iP grading criteria, some increments need to be done in each week for you to get full marks. That is, clumping all the iP work into a short burst of work will not earn you full marks.
Please follow instructions carefully. Any deviations can cause our grading scripts to miss your work (and result in you not getting credit for the work).
Deadline:
Note the typical deadline for weekly project tasks:
As per the above, you have until the next weekly briefing to finish this week's iP tasks, but if you fail to do so, we won't penalize you if you can catch up within one more week after that deadline.
1 Learn about the project
Read the following two sections, if you haven't done so already:
Admin iP - Overview
Admin iP - Grading
2 Set up prerequisites
Ensure you have followed the Preparation sections of the following course tools:
Admin Programming Language
3 Set up the project on your computer
Read through this week's topics before starting the project. If you encounter technical problems while doing the iP, follow the guidelines given below:
Keep the fork name as ip or else our grading scripts will not be able to detect it. You can change the fork name to something else after the semester (and the grading) is over e.g., after receiving your grade for the course.
Untick the [ ] Copy the master branch only option so that you get a copy of the full repo.
Enable the issue tracker of your fork (Go to Settings of your fork, scroll to the Features section, and tick the Issues checkbox). Reason: at times we post feedback on your issue tracker.
If the issue tracker is enabled, you should be able to visit the following URL https://github.com/{your_user_name}/ip/issues e.g., https://github.com/johnDoe/ip/issues
Clone the fork onto your computer.
Set up the project in your IDE as explained in the README file, if you plan to use an IDE for the project.
4 Add Increments while committing frequently: Level-0, Level-1, Level-2, Level-3, Level-4, A-TextUiTesting, Level-5, Level-6, A-Enums Fri, Jan 26th 1600
Implement the increments given below in the given order.
Keep in mind ...
From this point onward, commit code at important points. Minimally, commit after completing each increment. Remember not to commit .class files and any other file that should not be revision controlled.
From this point onward, after completing each increment,
git tag the commit with the exact increment ID e.g., Level-2, A-TextUiTesting
If you encounter issues connecting Sourcetree with your GitHub account, refer to this Sourcetree Tutorial.
Remember to take note of our plagiarism policies, if you haven't done so already:
iP feels like 'same same' ...?
As you do the iP, if you feel like you are not learning enough new stuff as you've done similar work before (at least on the Java/OOP side), there is an alternative approach you can take to the iP. See the panel below if you are interested.
Duke Level-0: Rename, Greet, Exit
Duke Level-1: Echo
Duke Level-2: Add, List
Duke Level-3: Mark as Done
Duke Level-4: ToDo, Event, Deadline
Duke A-TextUiTesting: Automated Text UI Testing
Duke Level-5: Handle Errors
Duke Level-6: Delete
Duke A-Enums: Use Enums [if applicable]
FAQ about iP increments
Q. How are the iP git tags used in grading?
Q. What if I discovered a bug after I finished an increment?
Q. I did multiple increments in the same commit. How to fix?
Q. The requirements of an increment scheduled for this week are already satisfied by the work I did in an earlier week. What now?
Q. My iP increments are not detected by the dashboard because I forgot to push my tags earlier. What now?
1 Submit weekly quiz
Weekly quiz: Read weekly topics allocated for this week and submit the weekly quiz before the quiz deadline (i.e., before the following weekly briefing).
2 [CS2103 students only] Form teams during the tutorial
We start tutorials this week, from Wed, Jan 31st. The tutorial timetable is on the course website. There are no tutorials on Fri, Jan 26th.
In-video quizzes can earn you bonus participation marks!
Starting from week 3, some pre-recorded videos in the Topics tab will contain in-video quizzes. Videos containing quizzes are labelled VideoQ+ (instead of the usual Video).
[MUST-WATCH] One More Course Briefing Segment (9 minutes)
One more course briefing segment to watch this week: the bulk of the course briefing was released as pre-recorded videos last week. There are a few more remaining parts, which will be released closer to the events they cover (e.g., the part covering the exam will be released closer to the exam). This week, we cover one more course briefing segment, given below:
CS2103/T Pitfalls (and how to avoid them)
Video 9 mins
The weekly briefing for this week will be done in hybrid mode -- you can attend it F2F (@LT15 Fri, Jan 26th from 4pm), join via Zoom, watch the recording later, or skip it altogether. iP Help Session: this week's briefing will be followed by a F2F help session for those who are stuck in the iP due to technical difficulties. To attend that help session, be in LT15 by 4.50pm at the latest.
Accept GitHub invitation from the course organization
Submit weekly quiz
1 Accept GitHub invitation from the course organization
We will be adding you all to the nus-cs2103-AY2324S2 GitHub org. Please accept the invitation sent by GitHub, as you need to be a member of the org for some of the future course activities. If you did not receive the invitation link, you can use the link https://github.com/orgs/nus-cs2103-AY2324S2/invitation.
Worth 2 participation points
2 Submit weekly quiz
Weekly quiz: Read weekly topics allocated for this week and submit the weekly quiz before the quiz deadline (i.e., before the following weekly briefing).
This week, we cover one more course briefing segment, given below. As with other course briefing segments, it is compulsory to watch.
tP Briefing (Part 1 -- Getting Started)
Video 9 mins
As usual, the weekly briefing will be hybrid mode, and optional to attend/watch.
We strongly recommend that you have a team project meeting before the tutorial. Keep notes of the meeting, and update project documents -- the tutor will ask for those during the tutorial.
Do the following during the meeting:
Finish the tP tasks allocated for the week.
Help each other finish iP tasks. Tasks allocated to this week are especially troublesome and some peer help can be very useful.
On a separate note, our guidelines on dealing with technical problems:
Weekly quiz: Read weekly topics allocated for this week and submit the weekly quiz before the quiz deadline (i.e., before the following weekly briefing).
The recess week falls in the middle of week 7. As you know, the CS2103/T week starts early, but we will honor the official timing of the recess week. Think of it like this:
Fri, Feb 23rd : CS2103/T Week 7 starts
Sat, Feb 24th - Sun, Mar 3rd : Recess week (Week 7 put on hold)
Mon, Mar 4th : CS2103/T Week 7 resumes
Recess week days are omitted from deadline calculations. For example, the quiz early submission deadline is week 7 Monday (Mon, Mar 4th ), not recess week Monday.
We have not scheduled any regular course activities during recess week. But you may be asked to catch up on missed activities in the previous weeks e.g., late submissions for the iP.
You are encouraged to try this week's tutorial questions before the actual tutorial. Otherwise, we might not have enough time to finish all the questions during the tutorial hour.
Weekly quiz: Read weekly topics allocated for this week and submit the weekly quiz before the quiz deadline (i.e., before the following weekly briefing).
Similar to the previous week, you are encouraged to try this week's tutorial questions before the actual tutorial. Otherwise we might not have enough time to finish all the questions during the tutorial hour.
Measure test coverage in your own IDE. Take a screenshot showing test coverage details such as lines covered, lines not covered, percentage of coverage for different files etc.
Post a screenshot in the tutorial workspace document. An example is given below:
2 Exercise: OODMs
Before the tutorial: Do the following question. As before, you should freehand-draw the diagram. You can use the association class notation in the answer.
OODM for the Course domain
During the tutorial:
Paste a screenshot/scan/photo of your answer in the online document.
Discuss a sample answer, as guided by the tutor.
When discussing OODMs, remember to avoid terms such as design, implementation, variable, method, as these are terms used in the solution domain, whereas an OODM is about the problem domain.
Bad: "we can design it this way" Good: "we can model it this way"
Bad: "coordinator variable contains Foo objects" Good: "Foo objects are playing the role of coordinator"
Bad: "the implementation will be like this" Good: "this model can support this object structure"
3 Exercise: Activity Diagrams
Before the tutorial: Do the following question, similar to the previous question.
Model workflow of a Burger shop
During the tutorial: Post your answer and discuss, similar to the previous question.
This is a printer-friendly version. It omits exercises, optional topics (i.e., four-star topics), and other extra content such as learning outcomes.
Introduction
What
The software architecture of a program or computing system is the structure or structures of the system, which comprise software elements, the externally visible properties of those elements, and the relationships among them. Architecture is concerned with the public side of interfaces; private details of elements—details having to do solely with internal implementation—are not architectural.
--- Software Architecture in Practice (2nd edition), Bass, Clements, and Kazman
The software architecture shows the overall organization of the system and can be viewed as a very high-level design. It usually consists of a set of interacting components that fit together to achieve the required functionality. It should be a simple and technically viable structure that is well-understood and agreed-upon by everyone in the development team, and it forms the basis for the implementation.
A possible architecture for a Minesweeper game:
Main components:
GUI: Graphical user interface
TextUi: Textual user interface
ATD: An automated test driver used for testing the game logic
Logic: Computation and logic of the game
Store: Storage and retrieval of game data (high scores etc.)
The architecture is typically designed by the software architect, who provides the technical vision of the system and makes high-level (i.e. architecture-level) technical decisions about the project.
Architecture diagrams
Reading
Architecture diagrams are free-form diagrams. There is no universally adopted standard notation for architecture diagrams. Any symbols that reasonably describe the architecture may be used.
While architecture diagrams have no standard notation, try to follow these basic guidelines when drawing them.
Minimize the variety of symbols. If the symbols you choose do not have widely-understood meanings (e.g., a drum symbol is widely understood as representing a database), explain their meaning.
Avoid the indiscriminate use of double-headed arrows to show interactions between components.
Consider the two architecture diagrams of the same software given below. Because Diagram 2 uses double-headed arrows, the important fact that GUI has a bidirectional dependency with the Logic component is no longer captured.
Architectural styles
Introduction
What
Software architectures follow various high-level styles (aka architectural patterns), just like how building architectures follow various architecture styles.
n-tier architectural style
What
In the n-tier style, higher layers make use of services provided by lower layers. Lower layers are independent of higher layers. Other names: multi-layered, layered.
Operating systems and network communication software often use n-tier style.
Client-server architectural style
What
The client-server style has at least one component playing the role of a server and at least one client component accessing the services of the server. This is an architectural style used often in distributed applications.
The online game and the web application below use the client-server style.
Transaction processing architectural style
What
The transaction processing style divides the workload of the system down to a number of transactions which are then given to a dispatcher that controls the execution of each transaction. Task queuing, ordering, undo etc. are handled by the dispatcher.
In this example from a banking system, transactions are generated by the terminals; these transactions are then sent to a central dispatching unit, which in turn dispatches them to various other units to execute.
Service-oriented architectural style
What
The service-oriented architecture (SOA) style builds applications by combining functionalities packaged as programmatically accessible services. SOA aims to achieve interoperability between distributed services, which may not even be implemented using the same programming language. A common way to implement SOA is through the use of XML web services where the web is used as the medium for the services to interact, and XML is used as the language of communication between service providers and service users.
Suppose that Amazon.com provides a web service for customers to browse and buy merchandise, while HSBC provides a web service for merchants to charge HSBC credit cards. Using these web services, an ‘eBookShop’ web application can be developed that allows HSBC customers to buy merchandise from Amazon and pay for them using HSBC credit cards. Because both Amazon and HSBC services follow the SOA architecture, their web services can be reused by the web application, even if all three systems use different programming platforms.
Event-driven architectural style
What
The event-driven style controls the flow of the application by detecting events from event emitters and communicating those events to interested event consumers. This architectural style is often used in GUIs.
When the ‘button clicked’ event occurs in a GUI, that event can be transmitted to components that are interested in reacting to that event. Similarly, events detected at a printer port can be transmitted to components related to operating the printer. The same event can be sent to multiple consumers too.
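A minimal, framework-agnostic Java sketch of the idea (all names below are illustrative): an event emitter keeps a list of interested consumers and notifies each of them when the event occurs.

    import java.util.ArrayList;
    import java.util.List;

    // contract for event consumers
    interface ClickListener {
        void onClick();
    }

    // event emitter
    class Button {
        private final List<ClickListener> listeners = new ArrayList<>();

        void addClickListener(ClickListener listener) {
            listeners.add(listener);
        }

        // called when the 'button clicked' event is detected
        void click() {
            for (ClickListener listener : listeners) {
                listener.onClick(); // the same event goes to every interested consumer
            }
        }
    }

    class Demo {
        public static void main(String[] args) {
            Button button = new Button();
            button.addClickListener(() -> System.out.println("Logger: button was clicked"));
            button.addClickListener(() -> System.out.println("Sound player: play click sound"));
            button.click();
        }
    }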
More
Using Styles
Most applications use a mix of these architectural styles.
An application can use a client-server architecture where the server component comprises several layers, i.e. it uses the n-tier architecture.
This is a printer-friendly version. It omits exercises, optional topics (i.e., four-star topics), and other extra content such as learning outcomes.
Introduction
What
Design is the creative process of transforming the problem into a solution; the solution is also called design. -- 📖 Software Engineering: Theory and Practice, Shari Lawrence Pfleeger and Joanne M. Atlee
Software design has two main aspects:
Product/external design: designing the external behavior of the product to meet the users' requirements. This is usually done by product designers with input from business analysts, user experience experts, user representatives, etc.
Implementation/internal design: designing how the product will be implemented to meet the required external behavior. This is usually done by software architects and software engineers.
The design of bigger systems needs to be done/shown at multiple levels.
This architecture diagram of se-edu/addressbook-level3 depicts the high-level design of the software.
Here are examples of lower level designs of some components of the same software:
UI
Logic
Storage
Top-down and bottom-up design
What
Multi-level design can be done in a top-down manner, bottom-up manner, or as a mix.
Top-down: Work out the high-level design first and flesh out the lower levels later. This is especially useful when designing big and novel systems where the high-level design needs to be stable before lower levels can be designed.
Bottom-up: Design lower level components first and put them together to create the higher-level systems later. This is not usually scalable for bigger systems. One instance where this approach might work is when designing a variation of an existing system or re-purposing existing components to build a new system.
Mix: Design the top levels using the top-down approach but switch to a bottom-up approach when designing the bottom levels.
Agile design
What
Agile design can be contrasted with full upfront design in the following way:
Agile designs are emergent, they’re not defined up front. Your overall system design will emerge over time, evolving to fulfill new requirements and take advantage of new technologies as appropriate. Although you will often do some initial architectural modeling at the very beginning of a project, this will be just enough to get your team going. With this approach, you do not need to have a fully documented set of models in place before you begin coding. -- adapted from agilemodeling.com
Abstraction
What
Abstraction is a technique for dealing with complexity. It works by establishing a level of complexity we are interested in, and suppressing the more complex details below that level.
The guiding principle of abstraction is that only details that are relevant to the current perspective or the task at hand need to be considered. As most programs are written to solve complex problems involving large amounts of intricate details, it is impossible to deal with all these details at the same time. That is where abstraction can help.
Data abstraction: abstracting away the lower level data items and thinking in terms of bigger entities
Within a certain software component, you might deal with a user data type, while ignoring the details contained in the user data item such as name and date of birth. These details have been ‘abstracted away’ as they do not affect the task of that software component.
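As a small illustration (the MailSender and User classes below are hypothetical), a component that sends reminders can work purely in terms of User objects, never looking at details such as the name or date of birth inside each User:

    import java.util.List;

    // MailSender works at the level of 'User'; the details inside a User
    // (name, date of birth, ...) have been abstracted away from its point of view.
    class MailSender {
        void sendReminderTo(List<User> users) {
            for (User user : users) {
                send(user.getEmail(), "Reminder: please update your particulars.");
            }
        }

        private void send(String emailAddress, String message) {
            System.out.println("To " + emailAddress + ": " + message); // stand-in for the real sending mechanism
        }
    }

    class User {
        private final String email;
        // other details (name, date of birth, ...) omitted; MailSender never touches them

        User(String email) {
            this.email = email;
        }

        String getEmail() {
            return email;
        }
    }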
Control abstraction: abstracting away details of the actual control flow to focus on tasks at a higher level
print(“Hello”) is an abstraction of the actual output mechanism within the computer.
Abstraction can be applied repeatedly to obtain progressively higher levels of abstraction.
An example of different levels of data abstraction: a File is a data item that is at a higher level than an array and an array is at a higher level than a bit.
An example of different levels of control abstraction: execute(Game) is at a higher level than print(Char) which is at a higher level than an Assembly language instruction MOV.
Abstraction is a general concept that is not limited to just data or control abstractions.
Some more general examples of abstraction:
An OOP class is an abstraction over related data and behaviors.
An architecture is a higher-level abstraction of the design of a software system.
Models (e.g., UML models) are abstractions of some aspect of reality.
Coupling
What
Coupling is a measure of the degree of dependence between components, classes, methods, etc. Low coupling indicates that a component is less dependent on other components. High coupling (aka tight coupling or strong coupling) is discouraged due to the following disadvantages:
Maintenance is harder because a change in one module could cause changes in other modules coupled to it (i.e. a ripple effect).
Integration is harder because multiple components coupled with each other have to be integrated at the same time.
Testing and reuse of the module is harder due to its dependence on other modules.
In the example below, design A appears to have more coupling between the components than design B.
How
X is coupled to Y if a change to Y can potentially require a change in X.
If the Foo class calls the method Bar#read(), Foo is coupled to Bar because a change to Bar can potentially (but not always) require a change in the Foo class. For example, if the signature of Bar#read() is changed, Foo needs to change as well; however, a change to the Bar#write() method may not require a change in the Foo class because Foo does not call Bar#write().
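A minimal sketch of what the code for the above example could look like (only the parts relevant to the coupling are shown; the method bodies are placeholders):

    class Foo {
        String readData(Bar bar) {
            // Foo is coupled to Bar: this call breaks if the signature of Bar#read() changes.
            return bar.read();
        }
    }

    class Bar {
        String read() {
            return "some data";
        }

        void write(String data) {
            // Foo never calls this method, so changes here are unlikely to affect Foo.
        }
    }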
Some examples of coupling: A is coupled to B if,
A has access to the internal structure of B (this results in a very high level of coupling)
A and B depend on the same global variable
A calls B
A receives an object of B as a parameter or a return value
A inherits from B
A and B are required to follow the same data format or communication protocol
Cohesion
What
Cohesion is a measure of how strongly-related and focused the various responsibilities of a component are. A highly-cohesive component keeps related functionalities together while keeping out all other unrelated things.
Higher cohesion is better. Disadvantages of low cohesion (aka weak cohesion):
Lowers the understandability of modules as it is difficult to express module functionalities at a higher level.
Lowers maintainability because a module can be modified due to unrelated causes (reason: the module contains code unrelated to each other) or many modules may need to be modified to achieve a small change in behavior (reason: because the code related to that change is not localized to a single module).
Lowers reusability of modules because they do not represent logical units of functionality.
How
Cohesion can be present in many forms. Some examples:
Code related to a single concept is kept together, e.g. the Student component handles everything related to students.
Code that is invoked close together in time is kept together, e.g. all code related to initializing the system is kept together.
Code that manipulates the same data structure is kept together, e.g. the GameArchive component handles everything related to the storage and retrieval of game sessions.
Suppose a Payroll application contains a class that deals with writing data to the database. If the class includes some code to show an error dialog to the user if the database is unreachable, that class is not cohesive because it seems to be interacting with the user as well as the database.
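A sketch of what such a non-cohesive class could look like (the class name and connection URL are hypothetical; the Swing JOptionPane dialog stands in for 'showing an error dialog to the user'):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;
    import javax.swing.JOptionPane;

    // Not cohesive: database access and user interaction are mixed in one class.
    class PayrollDatabaseWriter {
        void writeRecord(String record) {
            try (Connection connection =
                    DriverManager.getConnection("jdbc:hypothetical://payroll-db")) {
                // ... write the record using the connection ...
            } catch (SQLException e) {
                // A UI concern leaking into a storage class -> low cohesion
                JOptionPane.showMessageDialog(null, "Database is unreachable!");
            }
        }
    }

A more cohesive alternative is to let the exception propagate (or translate it into a storage-related exception) and leave it to a UI-level component to decide how to inform the user.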
Brainstorming
Brainstorming: A group activity designed to generate a large number of diverse and creative ideas for the solution of a problem.
In a brainstorming session there are no "bad" ideas. The aim is to generate ideas; not to validate them. Brainstorming encourages you to "think outside the box" and put "crazy" ideas on the table without fear of rejection.
User Surveys
Surveys can be used to solicit responses and opinions from a large number of stakeholders regarding a current product or a new product.
Observation
Observing users in their natural work environment can uncover product requirements. Usage data of an existing system can also be used to gather information about how an existing system is being used, which can help in building a better replacement e.g. to find the situations where the user makes mistakes when using the current system.
Interviews
Interviewing stakeholders and domain experts can produce useful information about project requirements.
Focus groups are a kind of informal interview within an interactive group setting. A group of people (e.g. potential users, beta testers) are asked about their understanding of a specific issue, process, product, advertisement, etc.
Video: How do focus groups work? -- Hector Lanz (extra)
Prototyping
Prototype: A prototype is a mock up, a scaled down version, or a partial system constructed
to get users’ feedback.
to validate a technical concept (a "proof-of-concept" prototype).
to give a preview of what is to come, or to compare multiple alternatives on a small scale before committing fully to one alternative.
for early field-testing under controlled conditions.
Prototyping can uncover requirements, in particular, those related to how users interact with the system. UI prototypes or mock ups are often used in brainstorming sessions, or in meetings with the users to get quick feedback from them.
A mock up (also called a wireframe diagram) of a dialog box:
[source: plantuml.com]
Prototyping can be used for discovering as well as specifying requirements e.g. a UI prototype can serve as a specification of what to build.
Product Surveys
Studying existing products can unearth shortcomings of existing solutions that can be addressed by a new product. Product manuals and other forms of documentation of an existing system can tell us how the existing solutions work.
When developing a game for a mobile device, a look at a similar PC game can give insight into the kind of features and interactions the mobile game can offer.
Introduction
What
Combining parts of a software product to form a whole is called integration. It is also one of the most troublesome tasks and it rarely goes smoothly.
Approaches
'Late and One Time' vs 'Early and Frequent'
In terms of timing and frequency, there are two general approaches to integration: 'late and one-time' and 'early and frequent'.
Late and one-time: wait till all components are completed and integrate all finished components near the end of the project.
This approach is not recommended because integration often causes many component incompatibilities (due to previous miscommunications and misunderstandings) to surface which can lead to delivery delays i.e. Late integration → incompatibilities found → major rework required → cannot meet the delivery date.
Early and frequent: integrate early and evolve each part in parallel, in small steps, re-integrating frequently.
A skeleton of the whole product can be written first. This can be done by one developer, possibly the one in charge of integration. After that, all developers can flesh out the skeleton in parallel, adding one feature at a time. After each feature is done, simply integrate the new code into the main system.
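For example, the skeleton could be a minimal, runnable version of the product in which every feature is only a stub, as in the hypothetical sketch below; each developer then replaces one stub at a time with real functionality and re-integrates.

    // A skeleton of the whole product: it compiles and runs end-to-end,
    // but every feature is still a stub to be fleshed out later.
    class ProductSkeleton {
        public static void main(String[] args) {
            searchCatalog("java");
            addToCart("item-001");
            checkout();
        }

        static void searchCatalog(String keyword) {
            System.out.println("TODO: search for " + keyword);
        }

        static void addToCart(String itemId) {
            System.out.println("TODO: add " + itemId + " to the cart");
        }

        static void checkout() {
            System.out.println("TODO: checkout");
        }
    }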
Here is an animation that compares the two approaches:
Big-Bang vs Incremental Integration
Big-bang integration: integrate all components at the same time.
Big-bang is not recommended because it will uncover too many problems at the same time which could make debugging and bug-fixing more complex than when problems are uncovered incrementally.
Incremental integration: integrate a few components at a time. This approach is better than big-bang integration because it surfaces integration problems in a more manageable way.
Here is an animation that compares the two approaches:
Build Automation
What
Build automation tools automate the steps of the build process, usually by means of build scripts.
In a non-trivial project, building a product from its source code can be a complex multi-step process. For example, it can include steps such as: pull code from the revision control system, compile, link, run automated tests, automatically update release documents (e.g. build number), package into a distributable, push to repo, deploy to a server, delete temporary files created during building/testing, email developers of the new build, and so on. Furthermore, this build process can be done ‘on demand’, it can be scheduled (e.g. every day at midnight) or it can be triggered by various events (e.g. triggered by a code push to the revision control system).
Some of these build steps such as compiling, linking and packaging, are already automated in most modern IDEs. For example, several steps happen automatically when the ‘build’ button of the IDE is clicked. Some IDEs even allow customization of this build process to some extent.
However, most big projects use specialized build tools to automate complex build processes.
Some other build tools: Grunt (JavaScript), Rake (Ruby)
Some build tools also serve as dependency management tools. Modern software projects often depend on third party libraries that evolve constantly. That means developers need to download the correct version of the required libraries and update them regularly. Therefore, dependency management is an important part of build automation. Dependency management tools can automate that aspect of a project.
Maven and Gradle, in addition to managing the build process, can play the role of dependency management tools too.
Continuous Integration and Continuous Deployment
An extreme application of build automation is called continuous integration (CI) in which integration, building, and testing happens automatically after each code change.
A natural extension of CI is Continuous Deployment (CD) where the changes are not only integrated continuously, but also deployed to end-users at the same time.
Introduction
What
Software development goes through different stages such as requirements, analysis, design, implementation and testing. These stages are collectively known as the software development life cycle (SDLC). There are several approaches, known as software development life cycle models (also called software process models), that describe different ways to go through the SDLC. Each process model prescribes a 'roadmap' for the software developers to manage the development effort. The roadmap describes the aims of the development stages, the outcome of each stage, and the workflow i.e. the relationship between stages.
Sequential Models
The sequential model, also called the waterfall model, views software development as a linear process, in which the project is seen as progressing through the development stages. The name waterfall stems from how the model is drawn to look like a waterfall (see below).
When one stage of the process is completed, it produces some artifacts to be used in the next stage. For example, the requirements stage produces a comprehensive list of requirements, to be used in the design phase.
A strict sequential model project moves only in the forward direction i.e., each stage is completed before starting the next. For example, once the requirements stage is over, there is no provision for revising the requirements later.
This model can work well for a project that produces software to solve a well-understood problem, in which case the requirements can remain stable and the effort can be estimated accurately. Furthermore, as each stage has a well-defined outcome, it is easy to track the progress of the project because one can gauge the project progress by monitoring which stage the project is in.
However, real-world projects often tackle problems that are not well-understood at the beginning, making them unsuitable for this model. For example, target users of a software product may not be able to state their requirements accurately at the start of the project, if they have not used a similar product before.
Iterative Models
The iterative model advocates producing the software by going through several iterations. Each of the iterations could potentially go through all the stages of the SDLC, from requirements gathering to deployment.
Each iteration produces a new version of the product, building upon the version produced in the previous iteration. Feedback from each iteration is factored into the subsequent iterations. For example, if an implementation task took longer than expected, the effort estimate for similar tasks in future iterations can be adjusted accordingly. Similarly, if a feature introduced in the current iteration was not well-received by target users, it can be removed or tweaked in the next iteration.
The iterative model can be done in a breadth-first or a depth-first manner.
In the breadth-first approach, an iteration evolves all major components and all functionality areas in parallel, i.e., most features and most components will be updated in each iteration, producing a working product at the end of each iteration.
In the depth-first approach, an iteration focuses on fleshing out only some components or some functionality area. Accordingly, early depth-first iterations might not produce a working product.
Taking a Minesweeper game as an example,
breadth-first iterations will deliver a fully playable version early. These early versions may have primitive functionality, for example, a rudimentary text-based UI, fixed board size, limited minefield layouts, etc. These functionalities (and corresponding components) will then be improved in later releases.
an early depth-first iteration could deliver the full user interface (UI) but with no game logic at all. Alternatively, an early iteration could focus on just the logic for generating initial layouts of the minefield. Neither will be a playable version of the game but both can be used to collect early feedback (about the UI, and the initial minefield layouts, respectively) which can then be used to guide later iterations.
A project can be done as a mixture of breadth-first and depth-first iterations, i.e., an iteration can contain some breadth-first work as well as some depth-first work, or, some iterations can be breadth-first while others are depth-first.
Agile Models
In 2001, a group of prominent software engineering practitioners met and brainstormed for an alternative to documentation-driven, heavyweight software development processes that were used in most large projects at the time. This resulted in something called the agile manifesto (a vision statement of what they were looking to do).
You are uncovering better ways of developing software by doing it and helping others do it.
Through this work you have come to value:
Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan
That is, while there is value in the items on the right, you value the items on the left more. -- Extract from the Agile Manifesto
Subsequently, some of the signatories of the manifesto went on to create process models that try to follow it. These processes are collectively called agile processes. Some of the key features of agile approaches are:
Requirements are prioritized based on the needs of the user, are clarified regularly (at times almost on a daily basis) with the entire project team, and are factored into the development schedule as appropriate.
Instead of doing a very elaborate and detailed design and a project plan for the whole project, the team works based on a rough project plan and a high level design that evolves as the project goes on.
There is a strong emphasis on complete transparency and responsibility sharing among the team members. The team is responsible together for the delivery of the product. Team members are accountable, and regularly and openly share progress with each other and with the user.
There are a number of agile processes in the development world today. eXtreme Programming (XP) and Scrum are two of the well-known ones.
Example process models
XP
The following description was adapted from the XP home page, emphasis added:
Extreme Programming (XP) stresses customer satisfaction. Instead of delivering everything you could possibly want on some date far in the future, this process delivers the software you need as you need it.
XP aims to empower developers to confidently respond to changing customer requirements, even late in the life cycle.
XP emphasizes teamwork. Managers, customers, and developers are all equal partners in a collaborative team. XP implements a simple, yet effective environment enabling teams to become highly productive. The team self-organizes around the problem to solve it as efficiently as possible.
XP aims to improve a software project in five essential ways: communication, simplicity, feedback, respect, and courage. Extreme Programmers constantly communicate with their customers and fellow programmers. They keep their design simple and clean. They get feedback by testing their software starting on day one. Every small success deepens their respect for the unique contributions of each and every team member. With this foundation, Extreme Programmers are able to courageously respond to changing requirements and technology.
XP has a set of simple rules. XP is a lot like a jig saw puzzle with many small pieces. Individually the pieces make no sense, but when combined together a complete picture can be seen. This flow chart shows how Extreme Programming's rules work together.
Pair programming, CRC cards, project velocity, and standup meetings are some interesting topics related to XP. Refer to extremeprogramming.org to find out more about XP.
Scrum
This description of Scrum was adapted from Wikipedia [retrieved on 18/10/2011], emphasis added:
Scrum is a process skeleton that contains sets of practices and predefined roles. The main roles in Scrum are:
The Scrum Master, who maintains the processes (typically in lieu of a project manager)
The Product Owner, who represents the stakeholders and the business
The Team, a cross-functional group who do the actual analysis, design, implementation, testing, etc.
A Scrum project is divided into iterations called Sprints. A sprint is the basic unit of development in Scrum. Sprints tend to last between one week and one month, and are a timeboxed (i.e. restricted to a specific duration) effort of a constant length.
Each sprint is preceded by a planning meeting, where the tasks for the sprint are identified and an estimated commitment for the sprint goal is made, and followed by a review or retrospective meeting, where the progress is reviewed and lessons for the next sprint are identified.
During each sprint, the team creates a potentially deliverable product increment (for example, working and tested software). The set of features that go into a sprint come from the product backlog, which is a prioritized set of high level requirements of work to be done. Which backlog items go into the sprint is determined during the sprint planning meeting. During this meeting, the Product Owner informs the team of the items in the product backlog that he or she wants completed. The team then determines how much of this they can commit to complete during the next sprint, and records this in the sprint backlog. During a sprint, no one is allowed to change the sprint backlog, which means that the requirements are frozen for that sprint. Development is timeboxed such that the sprint must end on time; if requirements are not completed for any reason they are left out and returned to the product backlog. After a sprint is completed, the team demonstrates the use of the software.
Scrum enables the creation of self-organizing teams by encouraging co-location of all team members, and verbal communication between all team members and disciplines in the project.
A key principle of Scrum is its recognition that during a project the customers can change their minds about what they want and need (often called requirements churn), and that unpredicted challenges cannot be easily addressed in a traditional predictive or planned manner. As such, Scrum adopts an empirical approach—accepting that the problem cannot be fully understood or defined, focusing instead on maximizing the team’s ability to deliver quickly and respond to emerging requirements.
In Scrum, on each day of a sprint, the team holds a daily scrum meeting called the "daily scrum". Meetings are typically held in the same location and at the same time each day. Ideally, a daily scrum meeting is held in the morning, as it helps set the context for the coming day's work. These scrum meetings are strictly time-boxed to 15 minutes. This keeps the discussion brisk but relevant.
...
During the daily scrum, each team member answers the following three questions:
What did you do yesterday?
What will you do today?
Are there any impediments in your way?
...
The daily scrum meeting is not used as a problem-solving or issue resolution meeting. Issues that are raised are taken offline and usually dealt with by the relevant subgroup immediately after the meeting.
Introduction
What
Software Quality Assurance (QA) is the process of ensuring that the software being built has the required levels of quality.
While testing is the most common activity used in QA, there are other complementary techniques such as static analysis, code reviews, and formal verification.
Validation vs Verification
Quality Assurance = Validation + Verification
QA involves checking two aspects:
Validation: are you building the right system i.e., are the requirements correct?
Verification: are you building the system right i.e., are the requirements implemented correctly?
Whether something belongs under validation or verification is not that important. What is more important is that both are done, instead of limiting the effort to verification only (i.e., remember that the requirements can be wrong too).
Code reviews
What
Code review is the systematic examination of code with the intention of finding where the code can be improved.
Reviews can be done in various forms. Some examples below:
Pull Request reviews
Project management platforms such as GitHub and Bitbucket allow new code to be proposed as pull requests (PRs) and provide the ability for others to review the code in the PR.
In pair programming
As pair programming involves two programmers working on the same code at the same time, there is an implicit review of the code by the other member of the pair.
Formal inspections
Inspections involve a group of people systematically examining project artifacts to discover defects. Members of the inspection team play various roles during the process, such as:
the author - the creator of the artifact
the moderator - the planner and executor of the inspection meeting
the secretary - the recorder of the findings of the inspection
the inspector/reviewer - the one who inspects/reviews the artifact
Advantages of code review over testing:
It can detect functionality defects as well as other problems such as coding standard violations.
It can verify non-code artifacts and incomplete code.
It does not require test drivers or stubs.
Disadvantages:
It is a manual process and therefore error-prone.
Static analysis
What
Static analysis: Static analysis is the analysis of code without actually executing the code.
Static analysis of code can find useful information such as unused variables, unhandled exceptions, style errors, and statistics. Most modern IDEs come with some inbuilt static analysis capabilities. For example, an IDE can highlight unused variables as you type the code into the editor.
The term static in static analysis refers to the fact that the code is analyzed without executing the code. In contrast, dynamic analysis requires the code to be executed to gather additional information about the code e.g., performance characteristics.
Higher-end static analysis tools (static analyzers) can perform more complex analysis such as locating potential bugs, memory leaks, inefficient code structures, etc.
Linters are a subset of static analyzers that specifically aim to locate areas where the code can be made 'cleaner'.
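As a small illustration, a typical static analyzer or linter could flag issues such as the ones in the (contrived) method below without ever running the code; the exact set of warnings depends on the tool used.

    class ReportGenerator {
        String generate(String title) {
            int unusedCount = 0;         // typically flagged: local variable is never used
            String header = null;
            if (title.isEmpty()) {
                header = "Untitled";
            }
            return header.toUpperCase(); // typically flagged: possible null dereference when title is non-empty
        }
    }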
Formal verification
What
Formal verification uses mathematical techniques to prove the correctness of a program.
An introduction to Formal Methods
Advantages:
Formal verification can be used to prove the absence of errors. In contrast, testing can only prove the presence of errors, not their absence.
Disadvantages:
It only proves compliance with the specification, but not the actual utility of the software.
It requires highly specialized notations and knowledge, which makes it an expensive technique to administer. Therefore, formal verification is more commonly used in safety-critical software such as flight control systems.
This is a printer-friendly version. It omits exercises, optional topics (i.e., four-star topics), and other extra content such as learning outcomes.
Introduction
What
Software Quality Assurance (QA) is the process of ensuring that the software being built has the required levels of quality.
While testing is the most common activity used in QA, there are other complementary techniques such as static analysis, code reviews, and formal verification.
Validation vs Verification
Quality Assurance = Validation + Verification
QA involves checking two aspects:
Validation: are you building the right system i.e., are the requirements correct?
Verification: are you building the system right i.e., are the requirements implemented correctly?
Whether something belongs under validation or verification is not that important. What is more important is that both are done, instead of limiting to only verification (i.e., remember that the requirements can be wrong too).
Code reviews
What
Code review is the systematic examination of code with the intention of finding where the code can be improved.
Reviews can be done in various forms. Some examples below:
Pull Request reviews
Project Management Platforms such as GitHub and BitBucket allow the new code to be proposed as Pull Requests and provide the ability for others to review the code in the PR.
In pair programming
As pair programming involves two programmers working on the same code at the same time, there is an implicit review of the code by the other member of the pair.
Formal inspections
Inspections involve a group of people systematically examining project artifacts to discover defects. Members of the inspection team play various roles during the process, such as:
the author - the creator of the artifact
the moderator - the planner and executor of the inspection meeting
the secretary - the recorder of the findings of the inspection
the inspector/reviewer - the one who inspects/reviews the artifact
Advantages of code review over testing:
It can detect functionality defects as well as other problems such as coding standard violations.
It can verify non-code artifacts and incomplete code.
It does not require test drivers or stubs.
Disadvantages:
It is a manual process and therefore, error prone.
Static analysis
What
Static analysis: Static analysis is the analysis of code without actually executing the code.
Static analysis of code can find useful information such as unused variables, unhandled exceptions, style errors, and statistics. Most modern IDEs come with some inbuilt static analysis capabilities. For example, an IDE can highlight unused variables as you type the code into the editor.
The term static in static analysis refers to the fact that the code is analyzed without executing the code. In contrast, dynamic analysis requires the code to be executed to gather additional information about the code e.g., performance characteristics.
Higher-end static analysis tools (static analyzers) can perform more complex analysis such as locating potential bugs, memory leaks, inefficient code structures, etc.
Linters are a subset of static analyzers that specifically aim to locate areas where the code can be made 'cleaner'.
Formal verification
What
Formal verification uses mathematical techniques to prove the correctness of a program.
An introduction to Formal Methods
Advantages:
Formal verification can be used to prove the absence of errors. In contrast, testing can only prove the presence of errors, not their absence.
Disadvantages:
It only proves compliance with the specification, not the actual utility of the software.
It requires highly specialized notations and knowledge which makes it an expensive technique to administer. Therefore, formal verifications are more commonly used in safety-critical software such as flight control systems.
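To give a flavour of what a machine-checked proof looks like, here is a tiny example written for the Lean proof assistant (Lean is just one of many possible tools, and this example is illustrative only). It proves that appending an empty list to any list leaves the list unchanged, a property that is established for every possible input rather than only the inputs a test happens to exercise.

-- Lean 4: a machine-checked proof that l ++ [] = l holds for every list l.
theorem append_empty_list (l : List Nat) : l ++ [] = l := by
  simp  -- discharged automatically using the standard library's simplification lemmas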
Introduction
A software requirement specifies a need to be fulfilled by the software product.
A software project may be,
a brownfield project i.e., develop a product to replace/update an existing software product
a greenfield project i.e., develop a totally new system from scratch
In either case, requirements need to be gathered, analyzed, specified, and managed.
Requirements come from stakeholders.
Stakeholder: An individual or an organization that is involved or potentially affected by the software project. e.g. users, sponsors, developers, interest groups, government agencies, etc.
Identifying requirements is often not easy. For example, stakeholders may not be aware of their precise needs, may not know how to communicate their requirements correctly, may not be willing to spend effort in identifying requirements, etc.
Non-Functional Requirements
Requirements can be divided into two categories:
Functional requirements specify what the system should do.
Non-functional requirements specify the constraints under which the system is developed and operated.
Some examples of non-functional requirement categories:
Data requirements e.g. size, etc.
Environment requirements e.g. the technical environment in which the system would operate or needs to be compatible with.
Accessibility, Capacity, Compliance with regulations, Documentation, Disaster recovery, Efficiency, Extensibility, Fault tolerance, Interoperability, Maintainability, Privacy, Portability, Quality, Reliability, Response time, Robustness, Scalability, Security, Stability, Testability, and more ...
Some concrete examples of NFRs
Business/domain rules: e.g. the size of the minefield cannot be smaller than five.
Constraints: e.g. the system should be backward compatible with data produced by earlier versions of the system; system testers are available only during the last month of the project; the total project cost should not exceed $1.5 million.
Technical requirements: e.g. the system should work on both 32-bit and 64-bit environments.
Performance requirements: e.g. the system should respond within two seconds.
Quality requirements: e.g. the system should be usable by a novice who has never carried out an online purchase.
Process requirements: e.g. the project is expected to adhere to a schedule that delivers a feature set every one month.
Notes about project scope: e.g. the product is not required to handle the printing of reports.
Any other noteworthy points: e.g. the game should not use images deemed offensive to those injured in real mine clearing activities.
You may have to spend extra effort in digging out NFRs as early as possible because,
NFRs are easier to miss e.g., stakeholders tend to think of functional requirements first
sometimes NFRs are critical to the success of the software e.g., a web application that is too slow or that has low security is unlikely to succeed even if it has all the right functionality.
Quality of Requirements
Here are some characteristics of well-defined requirements [📖 zielczynski]:
Unambiguous
Testable (verifiable)
Clear (concise, terse, simple, precise)
Correct
Understandable
Feasible (realistic, possible)
Independent
Necessary
Implementation-free (i.e. abstract)
Besides these criteria for individual requirements, the set of requirements as a whole should be
Consistent
Non-redundant
Complete
Prioritizing Requirements
Requirements can be prioritized based on their importance and urgency, while keeping in mind constraints such as schedule, budget, staff resources, and quality goals.
A common approach is to group requirements into priority categories. Note that all such scales are subjective, and stakeholders define the meaning of each level in the scale for the project at hand.
An example scheme for categorizing requirements:
Essential: The product must have this requirement fulfilled or else it does not get user acceptance.
Typical: Most similar systems have this feature although the product can survive without it.
Novel: New features that could differentiate this product from the rest.
Other schemes:
High, Medium, Low
Must-have, Nice-to-have, Unlikely-to-have
Level 0, Level 1, Level 2, ...
Some requirements can be discarded if they are considered ‘out of scope’.
The requirement given below is for a Calendar application. Stakeholders of the software (e.g. product designers) might decide the following requirement is not in the scope of the software.
The software records the actual time taken by each task and shows the difference between the actual and scheduled time for the task.
Introduction
What
Reuse is a major theme in software engineering practices. By reusing tried-and-tested components, the robustness of a new software system can be enhanced while reducing manpower and time requirements. Reuse comes in many forms; it can mean reusing a piece of code, a subsystem, or a whole software product.
When
While you may be tempted to use many libraries/frameworks/platforms that seem to crop up on a regular basis and promise to bring great benefits, note that there are costs associated with reuse. Here are some:
The reused code may be an overkill (think using a sledgehammer to crack a nut), increasing the size of, and/or degrading the performance of, your software.
The reused software may not be mature/stable enough to be used in an important product. That means the software can change drastically and rapidly, possibly in ways that break your software.
Immature software runs the risk of dying off as fast as it emerged, leaving you with a dependency that is no longer maintained.
The license of the reused software (or its dependencies) may restrict how you can use/develop your software.
The reused software might have bugs, missing features, or security vulnerabilities that are important to your product, but not so important to the maintainers of that software, which means those flaws will not get fixed as fast as you need them to.
Malicious code can sneak into your product via compromised dependencies.
APIs
What
An Application Programming Interface (API) specifies the interface through which other programs can interact with a software component. It is a contract between the component and its clients.
The GitHub API is a collection of web request formats that the GitHub server accepts and their corresponding responses. You can write a program that interacts with GitHub through that API.
When developing large systems, if you define the API of each component early, the development team can develop the components in parallel because the future behavior of the other components are now more predictable.
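As a rough sketch of what 'interacting through an API' can look like in code, the program below (using Java's standard HttpClient; the GitHub endpoint shown is a public one chosen only for illustration) sends a request in the format the API specifies and prints the response:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class GitHubApiDemo {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // The API contract fixes the URL format and headers the server accepts ...
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.github.com/users/octocat"))
                .header("Accept", "application/vnd.github+json")
                .build();
        // ... and the format of the response (JSON) the server sends back.
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
        System.out.println(response.body());
    }
}

Because both sides rely only on the agreed request/response formats, the program does not need to know anything about how GitHub implements the functionality behind the API.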
Libraries
What
A library is a collection of modular code that is general and can be used by other programs.
Java classes you get with the JDK (such as String, ArrayList, HashMap, etc.) are library classes that are provided in the default Java distribution.
Natty is a Java library that can be used for parsing strings that represent dates e.g. "The 31st of April in the year 2008".
Built-in modules you get with Python (such as csv, random, sys, etc.) are libraries that are provided in the default Python distribution. Classes such as list, str, dict are built-in library classes that you get with Python.
Colorama is a Python library that can be used for colorizing text in a CLI.
How
These are the typical steps required to use a library:
Read the documentation to confirm that its functionality fits your needs.
Check the license to confirm that it allows reuse in the way you plan to reuse it. For example, some libraries might allow non-commercial use only.
Download the library and make it accessible to your project. Alternatively, you can configure your dependency management tool to do it for you.
Call the library API from your code where you need to use the library's functionality.
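As a sketch of step 4 above, assuming the Natty library mentioned earlier has already been downloaded and added to the project (step 3), the code below calls Natty's API to parse a natural-language date string (class and method names as per Natty's documentation):

import com.joestelmach.natty.DateGroup;
import com.joestelmach.natty.Parser;
import java.util.Date;
import java.util.List;

public class NattyDemo {
    public static void main(String[] args) {
        Parser parser = new Parser();  // Natty's entry point for parsing date strings
        List<DateGroup> groups = parser.parse("the 31st of April in the year 2008");
        for (DateGroup group : groups) {
            for (Date date : group.getDates()) {
                System.out.println(date);  // the java.util.Date values Natty extracted
            }
        }
    }
}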
Frameworks
What
The overall structure and execution flow of a specific category of software systems can be very similar. The similarity is an opportunity to reuse at a high scale.
Running example:
IDEs for different programming languages are similar in how they support editing code, organizing project files, debugging, etc.
A software framework is a reusable implementation of a software (or part thereof) providing generic functionality that can be selectively customized to produce a specific application.
Running example:
Eclipse is an IDE framework that can be used to create IDEs for different programming languages.
Some frameworks provide a complete implementation of a default behavior which makes them immediately usable.
Running example:
Eclipse is a fully functional Java IDE out-of-the-box.
A framework facilitates the adaptation and customization of some desired functionality.
Running example:
The Eclipse plugin system can be used to create an IDE for different programming languages while reusing most of the existing IDE features of Eclipse.
Some frameworks cover only a specific component or an aspect.
JavaFX is a framework for creating Java GUIs. Tkinter is a GUI framework for Python.
More examples of frameworks
Frameworks for web-based applications: Drupal (PHP), Django (Python), Ruby on Rails (Ruby), Spring (Java)
Frameworks for testing: JUnit (Java), unittest (Python), Jest (JavaScript)
Frameworks vs Libraries
Although both frameworks and libraries are reuse mechanisms, there are notable differences:
Libraries are meant to be used ‘as is’ while frameworks are meant to be customized/extended. e.g., writing plugins for Eclipse so that it can be used as an IDE for different languages (C++, PHP, etc.), adding modules and themes to Drupal, and adding test cases to JUnit.
Your code calls the library code while the framework code calls your code. Frameworks use a technique called inversion of control, aka the “Hollywood principle” (i.e. don’t call us, we’ll call you!). That is, you write code that will be called by the framework, e.g. writing test methods that will be called by the JUnit framework. In the case of libraries, it is your code that calls the library.
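For example, in the (hypothetical) JUnit test class below, your own code never calls the test method; once the class is added to the project, the JUnit framework discovers the @Test-annotated method and calls it when tests are run (JUnit 5 syntax assumed):

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

public class StringUtilTest {
    @Test
    public void toUpperCase_lowerCaseInput_returnsUpperCase() {
        // Written by you, but called by the JUnit framework (inversion of control).
        assertEquals("HELLO", "hello".toUpperCase());
    }
}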
Platforms
What
A platform provides a runtime environment for applications. A platform is often bundled with various libraries, tools, frameworks, and technologies in addition to a runtime environment, but the defining characteristic of a software platform is the presence of a runtime environment.
Technically, an operating system can be called a platform. For example, Windows PC is a platform for desktop applications while iOS is a platform for mobile applications.
Two well-known examples of platforms are JavaEE and .NET, both of which sit above the operating systems layer, and are used to develop enterprise applications. Infrastructure services such as connection pooling, load balancing, remote code execution, transaction management, authentication, security, messaging etc. are done similarly in most enterprise applications. Both JavaEE and .NET provide these services to applications in a customizable way without developers having to implement them from scratch every time.
JavaEE (Java Enterprise Edition) is both a framework and a platform for writing enterprise applications. The runtime used by JavaEE applications is the JVM (Java Virtual Machine) that can run on different Operating Systems.
.NET is a similar platform and framework. Its runtime is called CLR (Common Language Runtime) and it is usually used on Windows machines.
Team Structures
Given below are three commonly used team structures in software development. Irrespective of the team structure, it is a good practice to assign roles and responsibilities to different team members so that someone is clearly in charge of each aspect of the project. In comparison, the ‘everybody is responsible for everything’ approach can result in more chaos and hence slower progress.
Egoless team
In this structure, every team member is equal in terms of responsibility and accountability. When any decision is required, consensus must be reached. This team structure is also known as a democratic team structure. This team structure usually finds a good solution to a relatively hard problem as all team members contribute ideas.
However, the democratic nature of the team structure bears a higher risk of falling apart due to the absence of an authority figure to manage the team and resolve conflicts.
Chief programmer team
Frederick Brooks proposed that software engineers learn from the medical surgical team in an operating room. In such a team, there is always a chief surgeon, assisted by experts in other areas. Similarly, in a chief programmer team structure, there is a single authoritative figure, the chief programmer. Major decisions, e.g. system architecture, are made solely by him/her and obeyed by all other team members. The chief programmer directs and coordinates the effort of other team members. When necessary, the chief will be assisted by domain specialists e.g. business specialists, database experts, network technology experts, etc. This allows individual group members to concentrate solely on the areas in which they have sound knowledge and expertise.
The success of such a team structure relies heavily on the chief programmer. Not only must he/she be a superb technical hand, he/she also needs good managerial skills. Under a suitably qualified leader, such a team structure is known to produce successful work.
Strict hierarchy team
At the opposite extreme of an egoless team, a strict hierarchy team has a strictly defined organization among the team members, reminiscent of the military or a bureaucratic government. Each team member only works on his/her assigned tasks and reports to a single “boss”.
In a large, resource-intensive, complex project, this could be a good team structure to reduce communication overhead.