CS250/280A1: |
[Schedule] | [Assignments] | [Infospaces] | [Grading] | [Syllabus] | [Home] |
Last updated on Thursday, May 4, 2023 8:30 AM |
Instructor |
Ariana Mims |
|
I will only test on materials that I cover in class. There is no required text for this course.
Readings will be based on the following recommended texts.
[NS] | The Elements of Computing Systems: Building a Modern Computer from First Principles. 2nd Edition. Noam Nisan and Shimon Schocken. ISBN-10/ ISBN-13: 0262539802 / 978-0262539807. MIT Press. |
[MJ] | How Computers Really Work: A Hands-On Guide to the Inner Workings of the Machine. Matthew Justice. ISBN-10/ISBN-13 : 1718500661 / 978-1718500662. No Starch Press. |
[JS] | The Secret Life of Programs: Understand Computers -- Craft Better Code. Jonathan E. Steinhart. ISBN-10/ ISBN-13 : 1593279701 / 978-1593279707. No Starch Press. |
[DT] | Computational Thinking (Essential Knowledge series). Peter J. Denning and Matti Tedre. ISBN-10/ ISBN-13: 0262536560 / 978-0262536561. The MIT Press. 2019. [Chapter 2] |
[RH] | Write Great Code, Volume 1: Understanding the Machine. 2nd Edition. Randall Hyde. ASIN: B07VSC1K8Z. No Starch Press. 2020. |
[RN] | Crafting Interpreters. Robert Nystrom. ISBN-10/ ISBN-13 : 0990582930 / 978-0990582939. Genever Benning. |
[PD] | Computer Networks: A Systems Approach. Larry Peterson and Bruce Davie. 4th edition. Morgan Kaufmann. ISBN: 978-0-12-370548-8. |
[MK] | Designing Data-Intensive Applications: The Big Ideas Behind Reliable, Scalable, and Maintainable Systems. Martin Kleppmann. ISBN-10/ ISBN-13 : 1449373321 / 978-1449373320. O'Reilly Media |
[PH] | Computer Organization and Design MIPS Edition: The Hardware/Software Interface. 5th Edition. David A. Patterson and John L. Hennessy. ISBN-10/ ISBN-13 0124077269/ 978-0124077263. Morgan Kaufmann. |
[RR] | Unix Systems Programming. Kay Robbins & Steve Robbins, 2nd edition. Prentice Hall. ISBN: 978-0-13-042411-2. |
[SGG] | Operating Systems Concepts. Avi Silberschatz, Peter Galvin, Greg Gagne. 8th edition. John Wiley & Sons, Inc. ISBN-13: 978-0-470-12872-5. |
[AP] | Database Internals. Alex Petrov. ISBN-10/ ISBN-13: 1492040347 / 978-1492040347. O'Reilly Media. |
[SC] | CUDA Programming: A Developer's Guide to Parallel Computing with GPUs (Applications of GPU Computing). 1st Edition. Shane Cook. ISBN-10/ ISBN-13: 0124159338 / 978-0124159334. Morgan Kaufmann. |
Introduction | References and HW | ||
This module introduces students to why computer systems thinking is critical in the design of large-scale systems. Systems thinking allows designers to view the problem space from multiple vantage points, and such vantage points enable creative solutions to problems. | [NS] Ch {1} |
||
Objectives: |
|||
08/22 |
Lecture 1 |
||
Binary number representations | Readings | ||
The realm of computer systems is based on binary, i.e., all operations are based on 1s and 0s. We will look at the rationale behind this decision and the efficiencies that this choice enables. We will also look at properties of binary numbers and how to perform conversions between decimal and binary number representations. |
[DT] Ch {2} [MJ] Ch {1} [NS] Ch {1} [JS] Ch {1} [RH] Ch {1} |
||
Objectives: [1] Discuss rationale for binary number representations [2] Explain properties of binary numbers [3] Perform calculations that convert from binary to decimal and vice versa |
|||
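For illustration, a minimal Python sketch of the conversion procedures covered in this module (repeated division by 2 for decimal-to-binary, and positional weights for the reverse); the function names are my own:

```python
def to_binary(n: int) -> str:
    """Convert a non-negative decimal integer to a binary string
    by repeated division by 2."""
    if n == 0:
        return "0"
    bits = []
    while n > 0:
        bits.append(str(n % 2))   # the remainder is the next least-significant bit
        n //= 2
    return "".join(reversed(bits))

def to_decimal(bits: str) -> int:
    """Convert a binary string back to decimal: each step shifts the
    accumulated value left (multiply by 2) and adds the incoming bit."""
    value = 0
    for b in bits:
        value = value * 2 + int(b)
    return value
```

For example, `to_binary(13)` yields `"1101"`, and `to_decimal` inverts it.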
08/27 08/29 |
Lecture 2 Lecture 3 |
HW1 released (1/24) | |
Signed numbers and floating-point representations | Readings | ||
Working with positive and negative numbers and exponents comes very naturally to us when we are dealing with decimal numbers. It turns out that doing this in binary is not as straightforward: there are no "signs" or "powers" built into binary, so we must accomplish all our objectives with 1s and 0s alone. In this module we will look at approaches to representing numbers that are positive, negative, gargantuan, or minuscule. | [NS] Ch {2} [JS] Ch {1} [MJ] Ch {1} |
||
Objectives:
|
|||
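Since binary has no built-in sign, one standard solution is two's complement, which this module covers; it can be sketched in a few lines of Python (the 8-bit width below is an arbitrary illustrative choice):

```python
def encode_twos_complement(n: int, width: int = 8) -> str:
    """Encode a signed integer as a two's-complement bit string: negative
    values wrap around modulo 2**width."""
    assert -(1 << (width - 1)) <= n < (1 << (width - 1)), "value out of range"
    return format(n & ((1 << width) - 1), f"0{width}b")

def decode_twos_complement(bits: str) -> int:
    """Decode: the top bit carries weight -2**(width-1); all other bits
    carry their usual positive weights."""
    width = len(bits)
    raw = int(bits, 2)
    return raw - (1 << width) if bits[0] == "1" else raw
```

For example, -5 encodes to `"11111011"` in 8 bits, and decoding recovers -5.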
09/03 |
Lecture 4 |
||
Boolean Logic and Algebra | |||
Boolean logic and algebra are foundational constructs that underpin computer systems. Boolean algebra is to binary values what regular algebra is to decimal numbers. We will look at how we can derive truth tables from Boolean functions, and how we can synthesize Boolean functions from a given truth table. We will look at several elementary gates and their corresponding truth tables. Finally, we will show that all Boolean functions can be constructed using nothing more than NAND gates. | [NS] Ch {1,2, Appendix-A} [JS] Ch {1} |
||
Objectives:
|
|||
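The claim that every Boolean function can be built from NAND gates alone can be demonstrated concretely; the sketch below builds NOT, AND, OR, and XOR out of NAND and checks each against its truth table:

```python
def nand(a: int, b: int) -> int:
    """The NAND gate: output is 0 only when both inputs are 1."""
    return 0 if (a == 1 and b == 1) else 1

# Every other gate, built from NAND alone:
def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))
def xor_(a, b):
    n = nand(a, b)                      # the classic 4-NAND XOR construction
    return nand(nand(a, n), nand(b, n))

# Verify against the expected truth tables over all input combinations.
for a in (0, 1):
    for b in (0, 1):
        assert and_(a, b) == (a & b)
        assert or_(a, b) == (a | b)
        assert xor_(a, b) == (a ^ b)
    assert not_(a) == 1 - a
```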
09/05 09/10 09/12 |
Lecture 5 Lecture 6 Lecture 7 |
||
Memory Basics | [NS] Ch {3} [JS] Ch {3} |
||
When we write programs, we leverage elements such as variables, arrays, and data structures that are used to store, modify, and retrieve values. All of these occur in memory. We will look at how state can be stored, retrieved, and modified. We will look at how we model progression of time in circuits using something called clocks. | |||
Objectives:
|
|||
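The idea of clocked state can be sketched as a toy register (a simplified model of a D flip-flop; the class and method names are invented for illustration): the input line may change at any time, but the stored value updates only on a clock tick:

```python
class Register:
    """A one-word clocked register: the input is sampled only on a clock
    edge, so the stored value is stable between ticks."""
    def __init__(self):
        self.stored = 0   # current state, visible at the output
        self.d = 0        # input line, free to change at any time

    def set_input(self, value: int):
        self.d = value    # changing the input does NOT change the output...

    def tick(self):
        self.stored = self.d  # ...until the clock edge latches it
```

This is how circuits model the progression of time: state advances only at clock edges.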
09/17 |
Lecture 8 |
||
Memory Hierarchy | |||
How computer systems manage and leverage speed differentials across the memory hierarchy has a tremendous impact on performance. We will look at registers, L1/L2/L3 caches, and main memory working in concert to ensure effective access times. We will also look at the design of caches; in particular, direct-mapped caches, fully associative caches, and N-way associative caches. | [JS] Ch {4} [RH] Ch {11} [NS] Ch {3} [SC] Ch {2,3} HW2 released (2/13) |
||
Objectives:
|
|||
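A direct-mapped cache comes down to a few lines of address arithmetic: the offset bits select a byte within a block, the index bits select the one line the block may occupy, and the tag disambiguates which block currently lives there. A sketch (the line count and block size are arbitrary illustrative parameters):

```python
class DirectMappedCache:
    """A toy direct-mapped cache: each address maps to exactly one line."""
    def __init__(self, num_lines: int = 8, block_size: int = 4):
        self.num_lines = num_lines
        self.block_size = block_size
        self.tags = [None] * num_lines   # one tag slot per line

    def access(self, address: int) -> str:
        block = address // self.block_size   # strip the offset bits
        index = block % self.num_lines       # which line this block maps to
        tag = block // self.num_lines        # identifies the block in that line
        if self.tags[index] == tag:
            return "hit"
        self.tags[index] = tag               # miss: fill (and possibly evict)
        return "miss"
```

Note that two blocks sharing an index evict each other (a conflict miss), which fully and N-way associative designs mitigate.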
09/19 09/24 |
Lecture 9 Lecture 10 |
||
Mid-Term I: 09/26 |
|||
The von Neumann Computing architecture | |||
Computers designed over the past 70 years or so are all based on the von Neumann architecture. In this module, we will look at several elements that comprise this architecture. Also included in this module is the notion of the stored-program concept, one of the most foundational ideas in computer science. The module will also cover aspects relating to bus architectures, Moore's law, and Dennard scaling. A key concept that we also look at in this module is memory-mapped I/O, which allows a computer system to interoperate with a plethora of Input/Output (I/O) devices. |
|||
Objectives:
|
|||
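The stored-program concept can be made concrete with a toy fetch-decode-execute loop in which instructions and data share one memory; the instruction set below is invented for illustration and is not a real ISA:

```python
def run(memory):
    """A toy von Neumann machine: 'memory' holds both instruction tuples
    and plain data words, and the CPU repeatedly fetches, decodes, and
    executes until it hits HALT."""
    acc = 0   # single accumulator register
    pc = 0    # program counter
    while True:
        op, arg = memory[pc]                 # fetch
        pc += 1
        if op == "LOAD":                     # decode + execute
            acc = memory[arg]
        elif op == "ADD":
            acc += memory[arg]
        elif op == "STORE":
            memory[arg] = acc
        elif op == "HALT":
            return acc
```

Running it on `[("LOAD", 4), ("ADD", 5), ("STORE", 6), ("HALT", 0), 2, 3, 0]` adds the two data words and stores the result back into the same memory that holds the program.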
10/01 10/03 |
Lecture 12 Lecture 13 |
||
Machine Language | |||
Programs expressed in high-level languages need to be converted to a representation that the computer system can execute. This includes translating the semantics of higher-level languages into those that the computer system can process. We will look at characteristics of machine languages, both binary and symbolic. We will also look at the notion of Instruction Set Architectures (ISAs) and examine different generations of the x86 ISA and ARM. |
|||
Objectives:
|
|||
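The relationship between symbolic and binary machine language can be sketched with a tiny assembler for a hypothetical 8-bit ISA (3 opcode bits plus a 5-bit operand; the opcodes and encoding here are invented for illustration, not x86 or ARM):

```python
# Invented opcode table for a hypothetical 8-bit ISA.
OPCODES = {"LOAD": 0b000, "ADD": 0b001, "STORE": 0b010, "JMP": 0b011, "HALT": 0b111}

def assemble(line: str) -> str:
    """Translate one symbolic instruction (e.g. 'ADD 7') into its
    binary encoding: opcode in the top 3 bits, operand in the low 5."""
    parts = line.split()
    opcode = OPCODES[parts[0]]
    operand = int(parts[1]) if len(parts) > 1 else 0
    return format((opcode << 5) | operand, "08b")
```

For example, `assemble("ADD 7")` yields `"00100111"`: opcode `001` followed by operand `00111`.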
10/08 |
Lecture 14 |
||
Software | |||
In this module we will look at how programs expressed in high-level languages are processed. In particular, we will look at the notion of a virtual machine (used in languages such as Java and C#) and stack machines. We will also look at heaps, stacks, and stack frames, which play a crucial role in program execution and garbage collection. | |||
Objectives:
|
|||
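A stack machine of the kind used by such virtual machines can be sketched as a small postfix evaluator: operands are pushed, and each operator pops its arguments and pushes the result:

```python
def eval_stack_machine(program):
    """Evaluate a postfix program on a stack machine, the way JVM-style
    virtual machines evaluate expressions."""
    stack = []
    for token in program:
        if token == "+":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif token == "*":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:
            stack.append(token)   # a literal operand
    return stack.pop()
```

For example, `(2 + 3) * 4` compiles to the postfix program `[2, 3, "+", 4, "*"]`, which evaluates to 20.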
10/10 10/15 |
Lecture 15 Lecture 16 |
||
Networking Concepts | |||
In this module, we will be discussing some of the foundational constructs in communications. This includes issues such as how data are encoded prior to transmission. We explore the two different network types (circuit switched and packet switched) and how they enable multiplexing (or sharing) of the networking infrastructure. We also explore encapsulation, a key concept that plays a foundational role in networked communications. |
[PD] Ch {1, 2} | ||
Objectives:
|
|||
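Encapsulation can be illustrated with a toy header format (the fields below are invented for illustration, not a real protocol): each layer prepends its header on send and strips it off on receive:

```python
import struct

def encapsulate(payload: bytes, src: int, dst: int) -> bytes:
    """Prepend a toy header (source port, destination port, payload length)
    in network byte order, as a sending layer would."""
    header = struct.pack("!HHH", src, dst, len(payload))
    return header + payload

def decapsulate(frame: bytes):
    """Strip the header and recover the fields plus the payload,
    as the peer layer does on receive."""
    src, dst, length = struct.unpack("!HHH", frame[:6])
    return src, dst, frame[6:6 + length]
```

Stacking several such layers, each wrapping the one above, is exactly the encapsulation pattern used throughout networked communications.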
10/17 10/22 |
Lecture 17 Lecture 18 |
||
Internet Architecture | |||
This module is the centerpiece of our discussions in networking. We will explore the core layered Internet protocol stack. Our discussions will focus on the core protocol underpinning the internet, the Internet Protocol (IP), covering both IPv4 (the original and currently dominant version) and IPv6 (the next generation). We will also look at reliable (TCP, the Transmission Control Protocol) and unreliable (UDP, the User Datagram Protocol) communications. |
[PD] Ch {1-5} HW3 (Due 11/13) |
||
Objectives:
|
|||
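UDP's datagram model can be demonstrated over the loopback interface in a few lines of Python; UDP provides addressed datagrams only, with no delivery or ordering guarantees (binding to port 0 asks the OS for any free port):

```python
import socket

# Receiver: bind a UDP socket on loopback and learn the assigned port.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))
addr = receiver.getsockname()

# Sender: no connection setup, just an addressed datagram.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"ping", addr)

data, peer = receiver.recvfrom(1024)
sender.close()
receiver.close()
```

Contrast this with TCP, where a connection must be established before data flows and the protocol retransmits and reorders as needed.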
10/24 10/29 10/31 |
Lecture 19 Lecture 20 Lecture 21 |
||
Mid-Term II (11/05): Covers all topics covered between Mid-Term I and up until the lecture on Tuesday. |
|||
Networking Extras | |||
In this module we look at additional critical concepts in networking. In particular, we look at issues such as how ISPs cope with the limited availability of unique IPv4 addresses, how several of these design decisions also help address security issues, and how hosts are able to discover other hosts over the internet. Our discussions in this module encompass the interrelated issues of private IP addresses, network address translation (NAT), and the Domain Name System (DNS). | |||
Objectives:
|
|||
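Network address translation can be sketched as a mapping table that rewrites (private IP, port) pairs to (public IP, fresh port) on the way out and reverses the mapping for replies; the addresses and port range below are arbitrary illustrative choices:

```python
class Nat:
    """A toy network address translator maintaining a translation table."""
    def __init__(self, public_ip: str):
        self.public_ip = public_ip
        self.table = {}          # public port -> (private ip, private port)
        self.next_port = 40000   # arbitrary starting port for illustration

    def outbound(self, private_ip: str, private_port: int):
        """Rewrite an outgoing packet's source to the shared public address."""
        public_port = self.next_port
        self.next_port += 1
        self.table[public_port] = (private_ip, private_port)
        return self.public_ip, public_port

    def inbound(self, public_port: int):
        """Route a reply back to the private host behind the NAT."""
        return self.table[public_port]
```

This is also why unsolicited inbound traffic is dropped by default: with no table entry, there is nowhere to route it, which doubles as a security benefit.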
11/07 |
Lecture 23 |
||
Data Structures for Storage Systems | |||
In this module, we look at a key data structure for one of the most important I/O devices: stable storage. We describe the key differences between in-memory and on-disk data structures. To keep our discussions self-contained, we start with a linear scan, which we then improve by pruning the search space using binary search. We discuss the notion of dynamic data structures that adapt to the contents of the data items. Next, we cover a popular in-memory data structure, Binary Search Trees (BSTs). We then cover one of the most foundational data structures for storage systems: B-trees. In particular, B-trees are data structures that are designed to align with how data access and transfers (block-based) take place in storage systems. We highlight both the similarities and key differences between BSTs and B-trees. Issues such as insertions, deletions, merges, and balancing in B-trees are discussed as well. We complete our discussions on B-trees with variants such as B*-trees and B+-trees. |
HW4 (Due: 5/3) | ||
Objectives:
|
|||
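The progression from linear scan to search-space pruning can be made concrete by counting comparisons; the sketch below contrasts the two approaches (BSTs and B-trees bake the same halving idea into their structure, with B-trees fanning out wider to match block-based storage access):

```python
def linear_scan(items, target):
    """O(n): examine every item until a match. Returns (found, comparisons)."""
    comparisons = 0
    for x in items:
        comparisons += 1
        if x == target:
            return True, comparisons
    return False, comparisons

def binary_search(items, target):
    """O(log n) on sorted data: prune half the remaining search space
    at each step. Returns (found, comparisons)."""
    comparisons, lo, hi = 0, 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        comparisons += 1
        if items[mid] == target:
            return True, comparisons
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return False, comparisons
```

On 1024 sorted items, a linear scan may need over a thousand comparisons where binary search needs at most about eleven.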
11/12 11/12 11/14 11/19 |
Lecture 24 Lecture 25 Lecture 26 Lecture 27 |
||
Graphics processing units (GPUs) | |||
Graphics processing units (GPUs) are used extensively in gaming and AI/ML (Artificial Intelligence/Machine Learning) applications. In this module, we will explore GPUs and contrast them with CPU-based systems. A key emphasis here is to identify the specific types of problems that GPUs are particularly well-suited for. We will explore the issues of memory locality and cache coherency in both GPU- and CPU-based systems. Notably, CPUs and GPUs work in tandem with each other, and workloads that effectively target them will see considerably improved performance. |
|||
Objectives:
|
|||
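The data-parallel execution model that GPUs excel at can be sketched conceptually in Python: one kernel function runs once per index, with no ordering assumed between "threads" (simulated sequentially here; real GPU code would use CUDA or similar):

```python
def launch(kernel, n, *args):
    """A conceptual model of a GPU launch: run the same kernel for every
    index 0..n-1. On real hardware these invocations run in parallel."""
    for i in range(n):
        kernel(i, *args)

def saxpy(i, a, x, y, out):
    """One 'thread' of SAXPY (a*x + y), a classic data-parallel kernel:
    each index touches only its own elements, so there are no dependencies
    between threads."""
    out[i] = a * x[i] + y[i]
```

Problems like this, with abundant independent per-element work and regular memory access, are exactly the workloads GPUs are well-suited for.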
11/21 12/03 12/05 |
Lecture 28 Lecture 29 Lecture 30 |
||
Comprehensive Final Exam: Tuesday, December 10th, 8:30-10:30pm |
|||
Department of Computer Science, Colorado State University, Fort Collins, CO 80523 USA © 2023 Colorado State University |