CS250/280A1:
Foundations of Computer Systems


Schedule

Last updated on Thursday, May 4, 2023 8:30 AM
Instructor  
 

Ariana Mims
Office: Room 450, Computer Science
Office Hours:
9:00 - 10:00 am Tuesday
E-mail: compsci_cs250 {aT} colostate.edu
(with the obvious change)

Lecture Coordinates
TTH: 5:00 - 6:15 pm
Scott Bioengineering 101

 


Labs/Recitations

  R1 Wed 9:00-10:50 AM CSB-315
  R2 Wed 11:00-11:50 AM CSB-315
  R3 Wed 12:00-12:50 PM CSB-315
  R4 Wed 1:00-1:50 PM CSB-315
  R5 Wed 2:00-2:50 PM CSB-315
  R6 Wed 3:00-3:50 PM CSB-315

Teaching Assistants
Office Hours in CSB 120 and Teams
E-mail: compsci_cs250{aT} colostate.edu

Graduate Teaching Assistants
Yunik Tamrakar

Dennis Kim

Phil Hopkins

Santoshkumar Tongli


Undergraduate Teaching Assistants
Omar Soliman

Emily Cosgriff

Sanjar Yuldashev

Kushal Alimineti

I will only test on materials that I cover in class. There is no required text for this course.
Readings will be based on the following recommended texts.

[NS] Noam Nisan and Shimon Schocken. The Elements of Computing Systems: Building a Modern Computer from First Principles. 2nd Edition. MIT Press. ISBN-10/ISBN-13: 0262539802 / 978-0262539807.
[MJ] Matthew Justice. How Computers Really Work: A Hands-On Guide to the Inner Workings of the Machine. No Starch Press. ISBN-10/ISBN-13: 1718500661 / 978-1718500662.
[JS] Jonathan E. Steinhart. The Secret Life of Programs: Understand Computers -- Craft Better Code. No Starch Press. ISBN-10/ISBN-13: 1593279701 / 978-1593279707.
[DT] Peter J. Denning and Matti Tedre. Computational Thinking. Essential Knowledge series. The MIT Press. 2019. ISBN-10/ISBN-13: 0262536560 / 978-0262536561. [Chapter 2]
[RH] Randall Hyde. Write Great Code, Volume 1: Understanding the Machine. 2nd Edition. No Starch Press. 2020. ASIN: B07VSC1K8Z.
[RN] Robert Nystrom. Crafting Interpreters. Genever Benning. ISBN-10/ISBN-13: 0990582930 / 978-0990582939.
[PD] Larry Peterson and Bruce Davie. Computer Networks: A Systems Approach. 4th Edition. Morgan Kaufmann. ISBN-13: 978-0-12-370548-8.
[MK] Martin Kleppmann. Designing Data-Intensive Applications: The Big Ideas Behind Reliable, Scalable, and Maintainable Systems. O'Reilly Media. ISBN-10/ISBN-13: 1449373321 / 978-1449373320.
[PH] David A. Patterson and John L. Hennessy. Computer Organization and Design, MIPS Edition: The Hardware/Software Interface. 5th Edition. Morgan Kaufmann. ISBN-10/ISBN-13: 0124077269 / 978-0124077263.
[RR] Kay Robbins and Steve Robbins. Unix Systems Programming. 2nd Edition. Prentice Hall. ISBN-13: 978-0-13-042411-2.
[SGG] Avi Silberschatz, Peter Galvin, and Greg Gagne. Operating System Concepts. 8th Edition. John Wiley & Sons, Inc. ISBN-13: 978-0-470-12872-5.
[AP] Alex Petrov. Database Internals. O'Reilly Media. ISBN-10/ISBN-13: 1492040347 / 978-1492040347.
[SC] Shane Cook. CUDA Programming: A Developer's Guide to Parallel Computing with GPUs (Applications of GPU Computing). 1st Edition. Morgan Kaufmann. ISBN-10/ISBN-13: 0124159338 / 978-0124159334.


Introduction

This module introduces students to why computer systems thinking is critical in the design of large-scale systems. Computer systems allow viewing the problem space from multiple vantage points, which lets designers come up with creative solutions to problems.

Readings: [NS] Ch {1}


 

Objectives:
  1. Summarize basic computer systems concepts
  2. Identify key issues in computer systems

 
08/22: Lecture 1

Binary number representations

The realm of computer systems is based on binary, i.e., all operations are based on 1s and 0s. We will look at the rationale behind this decision and the efficiencies that this choice enables. We will also look at properties of binary numbers and how to perform conversions between decimal and binary number representations (a short conversion sketch follows the objectives).

Readings:
[DT] Ch {2}
[MJ] Ch {1}
[NS] Ch {1}
[JS] Ch {1}
[RH] Ch {1}
Objectives:
  1. Discuss the rationale for binary number representations
  2. Explain properties of binary numbers
  3. Perform calculations that convert from binary to decimal and vice versa
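
For instance, here is a minimal C sketch of both conversion directions (my own illustration, not course-supplied code): repeated division by 2 yields the binary digits of a decimal number, and accumulating powers of 2 recovers the decimal value.

/* Decimal <-> binary conversion by repeated division and positional weights. */
#include <stdio.h>

/* Print the binary digits of n by peeling off remainders mod 2. */
void to_binary(unsigned n, char *buf) {
    char tmp[33];
    int i = 0;
    if (n == 0) tmp[i++] = '0';
    while (n > 0) { tmp[i++] = '0' + (n % 2); n /= 2; }
    /* Remainders come out least-significant first, so reverse them. */
    for (int j = 0; j < i; j++) buf[j] = tmp[i - 1 - j];
    buf[i] = '\0';
}

/* Accumulate powers of two: each digit shifts the total left one place. */
unsigned from_binary(const char *bits) {
    unsigned n = 0;
    for (; *bits; bits++) n = n * 2 + (*bits - '0');
    return n;
}

int main(void) {
    char buf[33];
    to_binary(42, buf);
    printf("42 in binary: %s\n", buf);                        /* 101010 */
    printf("101010 in decimal: %u\n", from_binary("101010")); /* 42 */
    return 0;
}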

 
08/27: Lecture 2
08/29: Lecture 3
HW1 released (1/24)

Signed numbers and floating-point representations

Working with positive and negative numbers and exponents comes very naturally to us when we are dealing with decimal numbers. It turns out that doing this in binary is not that straightforward: there are no "signs" or "powers" built into binary, so we must accomplish all our objectives with nothing but 1s and 0s. In this module we will look at approaches to represent numbers that are positive, negative, gargantuan, or minuscule (a short sketch follows the objectives).

Readings:
[NS] Ch {2}
[JS] Ch {1}
[MJ] Ch {1}
Objectives:
  1. Discuss the need for signed number representations
  2. Explain and compute Two's and One's complement representations of negative numbers
  3. Identify challenges in representing gargantuan/minuscule numbers using binary
  4. Explain the single-precision and double-precision floating-point representations that can represent both gargantuan and minuscule numbers
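
A minimal C sketch of these ideas (my own illustration; the example values are arbitrary): two's complement negation by invert-and-add-one, and a peek at the sign/exponent/fraction fields of an IEEE 754 single-precision float.

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void) {
    /* Two's complement: negate by inverting all bits and adding 1.
     * In 8 bits, 5 = 00000101, so -5 = 11111011 = 0xFB. */
    uint8_t five = 5;
    uint8_t neg_five = (uint8_t)(~five + 1);
    printf("-5 in 8-bit two's complement: 0x%02X\n", neg_five);

    /* One's complement stops at the inversion: 11111010 = 0xFA. */
    printf("-5 in 8-bit one's complement: 0x%02X\n", (uint8_t)~five);

    /* IEEE 754 single precision: 1 sign bit, 8 exponent bits (bias 127),
     * 23 fraction bits. Copy the float's bytes into an integer to inspect. */
    float x = -6.25f;   /* -1.5625 * 2^2, so exponent field = 2 + 127 = 129 */
    uint32_t bits;
    memcpy(&bits, &x, sizeof bits);
    printf("sign=%u exponent=%u fraction=0x%06X\n",
           bits >> 31, (bits >> 23) & 0xFF, bits & 0x7FFFFF);
    return 0;
}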

 
09/03: Lecture 4

Boolean Logic and Algebra

Boolean logic and algebra are foundational constructs that underpin computer systems: Boolean algebra is to binary values what regular algebra is to decimal numbers. We will look at how we can derive truth tables from Boolean functions, and how we can synthesize Boolean functions from a given truth table. We will look at several elementary gates and their corresponding truth tables. Finally, we will prove that all Boolean functions can be constructed using nothing more than NAND gates (a small sketch of this idea follows the objectives).

Readings:
[NS] Ch {1, 2, Appendix A}
[JS] Ch {1}
 

Objectives:

  1. Derive truth tables from Boolean functions
  2. Explain and apply De Morgan’s Laws to transform Boolean expressions
  3. Explain elementary gates: Or, And, Not, Xor, Nand, and Nor
  4. Synthesize Boolean functions from Truth Tables
  5. Prove that all Boolean functions can be constructed using only NAND gates
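
A minimal C sketch of NAND universality (my own illustration): NOT, AND, and OR each built purely from a two-input NAND, with the truth table printed so the constructions can be checked.

#include <stdio.h>

int nand(int a, int b) { return !(a && b); }

/* NOT(a)   = NAND(a, a)                         */
int not_(int a)        { return nand(a, a); }
/* AND(a,b) = NOT(NAND(a, b))                    */
int and_(int a, int b) { return not_(nand(a, b)); }
/* OR(a,b)  = NAND(NOT(a), NOT(b))  (De Morgan)  */
int or_(int a, int b)  { return nand(not_(a), not_(b)); }

int main(void) {
    printf("a b | NOT a  a AND b  a OR b\n");
    for (int a = 0; a <= 1; a++)
        for (int b = 0; b <= 1; b++)
            printf("%d %d |   %d       %d        %d\n",
                   a, b, not_(a), and_(a, b), or_(a, b));
    return 0;
}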
 

09/05: Lecture 5
09/10: Lecture 6
09/12: Lecture 7

Memory Basics

When we write programs, we leverage elements such as variables, arrays, and data structures to store, modify, and retrieve values. All of these live in memory. We will look at how state can be stored, retrieved, and modified, and at how the progression of time in circuits is modeled using clocks (a small sketch follows the objectives).

Readings:
[NS] Ch {3}
[JS] Ch {3}
 

Objectives:

  1. Explain how the progression of time is modeled and its implications for storage of state
  2. Identify and explain the differences between combinational and sequential logic
  3. Describe the data flip-flop (DFF)
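
A minimal C sketch (my own model, not hardware-accurate timing): a data flip-flop that latches its input on each clock tick, so its output is the input delayed by one tick.

#include <stdio.h>

typedef struct { int state; } dff_t;

/* One rising clock edge: emit the stored bit, then latch the new input. */
int dff_tick(dff_t *d, int in) {
    int out = d->state;
    d->state = in;
    return out;
}

int main(void) {
    dff_t d = { .state = 0 };
    int inputs[] = { 1, 0, 1, 1, 0 };
    printf("t  in  out\n");
    for (int t = 0; t < 5; t++)
        printf("%d   %d   %d\n", t, inputs[t], dff_tick(&d, inputs[t]));
    /* The output column is the input column delayed by one tick:
     * sequential logic remembers state across time, unlike combinational
     * logic, whose outputs depend only on the current inputs. */
    return 0;
}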
 

09/17: Lecture 8

Memory Hierarchy

How computer systems manage and leverage speed differentials across the memory hierarchy has a tremendous impact on performance. We will look at registers, L1/L2/L3 caches, and main memory working in concert to ensure effective access times. We will also look at the design of caches; in particular, direct-mapped caches, fully associative caches, and N-way set-associative caches (a locality-profiling sketch follows the objectives).

Readings:
[JS] Ch {4}
[RH] Ch {11}
[NS] Ch {3}
[SC] Ch {2,3}

HW2 released (2/13)
 

Objectives:

  1. Explain organization of the memory hierarchy encompassing registers, L1/L2/L3 caches, and main memory.
  2. Describe memory addressing schemes and how word-aligned memory accesses occur
  3. Synthesize concepts of cache lines and differences in direct-mapped and associative caching schemes
  4. Design programs that identify and profile the impact of different caching schemes based on temporal or spatial locality
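
A minimal C sketch in the spirit of objective 4 (my own; timings will vary by machine and compiler flags): summing a matrix row by row walks memory sequentially and exploits spatial locality, while column-by-column traversal of the same data strides across cache lines and defeats it.

#include <stdio.h>
#include <time.h>

#define N 4096
static int a[N][N];

int main(void) {
    for (int i = 0; i < N; i++)        /* fill the matrix (and its pages) */
        for (int j = 0; j < N; j++)
            a[i][j] = i + j;

    long sum = 0;
    clock_t t0 = clock();
    for (int i = 0; i < N; i++)        /* row-major: walks memory in order */
        for (int j = 0; j < N; j++)
            sum += a[i][j];
    clock_t t1 = clock();
    for (int j = 0; j < N; j++)        /* column-major: strides N ints */
        for (int i = 0; i < N; i++)
            sum += a[i][j];
    clock_t t2 = clock();

    printf("row-major:    %.3f s\n", (double)(t1 - t0) / CLOCKS_PER_SEC);
    printf("column-major: %.3f s\n", (double)(t2 - t1) / CLOCKS_PER_SEC);
    printf("(checksum %ld)\n", sum);   /* use sum so the loops are not elided */
    return 0;
}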
 
09/19: Lecture 9
09/24: Lecture 10

Mid-Term I: 09/26



 
   
The von Neumann Computing Architecture

Computers designed over the past 70 years or so are all based on the von Neumann architecture. In this module, we will look at the several elements that comprise this architecture, including the stored-program concept, one of the most foundational ideas in computer science. The module will also cover bus architectures, Moore's law, and Dennard scaling. A key concept in this module is memory-mapped I/O, which allows a computer system to interoperate with a plethora of input/output (I/O) devices (a small sketch follows the objectives).

 
 

Objectives:

  1. Explain the stored program concept
  2. Identify different constructs within the von Neumann architecture and how they have influenced design of computer systems
  3. Explain how memory mapped I/O allows the computer system to reconcile diverse I/O devices
  4. Explain differences between the Harvard architecture and the von Neumann architecture
  5. Explain the role buses play in the von Neumann architecture.
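
A minimal C sketch of memory-mapped I/O (my own; the "device" here is a stand-in buffer so the example runs anywhere, whereas real hardware would fix the registers at physical device addresses): ordinary loads and stores perform the I/O, and 'volatile' keeps the compiler from caching or reordering the accesses.

#include <stdio.h>
#include <stdint.h>

static uint8_t fake_device[8];           /* stand-in for device registers */

#define UART_DATA   (*(volatile uint8_t *)&fake_device[0])
#define UART_STATUS (*(volatile uint8_t *)&fake_device[4])
#define TX_READY    0x01                 /* hypothetical status bit */

/* Write one character out of a memory-mapped serial port. */
void uart_putc(char c) {
    while (!(UART_STATUS & TX_READY))    /* spin until the device is ready */
        ;
    UART_DATA = (uint8_t)c;              /* a plain store performs the I/O */
}

int main(void) {
    UART_STATUS = TX_READY;              /* pretend the device is ready */
    uart_putc('A');
    printf("device data register now holds: %c\n", UART_DATA);
    return 0;
}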
 
10/01: Lecture 12
10/03: Lecture 13

Machine Language

Programs expressed in high-level languages need to be converted to a representation that the computer system can execute. This includes translating the semantics of higher-level languages into operations the computer system can process. We will look at the characteristics of machine languages, both binary and symbolic, and at the notion of Instruction Set Architectures (ISAs), including different generations of the x86 ISA and ARM (a toy decoder sketch follows the objectives).

 
 

Objectives:

  1. Explain the difference between binary and symbolic versions of machine language
  2. Identify the role of the ISA as a model for how computations are performed
  3. Explain the evolution of the x86 ISAs over time
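
A minimal C sketch (the 16-bit instruction format here is invented for illustration and is not x86, ARM, or any real ISA): decoding binary instructions back into their symbolic form shows the one-to-one correspondence between the two representations.

#include <stdio.h>
#include <stdint.h>

/* Hypothetical format: [15:12] opcode, [11:8] dest reg,
 *                      [7:4] src reg,  [3:0] immediate. */
static const char *opcodes[] = { "HALT", "LOAD", "ADD", "STORE" };

/* Print the symbolic (assembly-like) form of one binary instruction. */
void disassemble(uint16_t insn) {
    unsigned op  = (insn >> 12) & 0xF;
    unsigned rd  = (insn >> 8)  & 0xF;
    unsigned rs  = (insn >> 4)  & 0xF;
    unsigned imm =  insn        & 0xF;
    if (op < 4)
        printf("0x%04X  ->  %-5s r%u, r%u, #%u\n",
               insn, opcodes[op], rd, rs, imm);
}

int main(void) {
    /* ADD r1, r2, #3 encodes as opcode=2, rd=1, rs=2, imm=3 -> 0x2123. */
    uint16_t program[] = { 0x2123, 0x1204, 0x0000 };
    for (int i = 0; i < 3; i++) disassemble(program[i]);
    return 0;
}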

 
10/08: Lecture 14

Software

In this module we will look at how programs expressed in high-level languages are processed. In particular, we will look at the notion of a virtual machine (used in languages such as Java and C#) and stack machines. We will also look at heaps, stacks, and stack frames, which play a crucial role in program execution and garbage collection (a tiny stack-machine sketch follows the objectives).
 

Objectives:

  1. Explain the two-tiered compilation model in virtual machine environments
  2. Explain the role (and interactions between) stacks, heaps, and stack frames
  3. Describe the stack machine
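
A minimal C sketch of a stack machine (my own toy, far simpler than a real VM such as the JVM): operands are pushed, and each operation replaces the top two stack entries with its result.

#include <stdio.h>

enum { PUSH, ADD, MUL, HALT };

int run(const int *code) {
    int stack[64], sp = 0;               /* sp = number of entries */
    for (int pc = 0; ; ) {
        switch (code[pc++]) {
        case PUSH: stack[sp++] = code[pc++]; break;
        case ADD:  sp--; stack[sp - 1] += stack[sp]; break;
        case MUL:  sp--; stack[sp - 1] *= stack[sp]; break;
        case HALT: return stack[sp - 1];
        }
    }
}

int main(void) {
    /* (2 + 3) * 4: push 2, push 3, add, push 4, multiply. */
    int code[] = { PUSH, 2, PUSH, 3, ADD, PUSH, 4, MUL, HALT };
    printf("(2 + 3) * 4 = %d\n", run(code));   /* prints 20 */
    return 0;
}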
 
10/10: Lecture 15
10/15: Lecture 16

Networking Concepts

In this module, we will discuss some of the foundational constructs in communications, including how data are encoded prior to transmission. We explore the two different network types (circuit switched and packet switched) and how they enable multiplexing (or sharing) of the network. We also explore encapsulation, which plays a foundational role in networked communications (a small encapsulation sketch follows the objectives).

Readings:
[PD] Ch {1, 2}

Objectives:

  1. Describe data encoding and representation mechanisms in communications
  2. Explain differences in circuit switched networks and packet switched networks
  3. Identify how multiplexing of data can occur over shared networks
  4. Understand the organizing principles of protocol layering and encapsulation
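
A minimal C sketch of encapsulation (the bracketed "headers" are invented placeholders, not real protocol formats): each layer prepends its own header to the payload handed down from the layer above, and strips it off on the way back up.

#include <stdio.h>
#include <string.h>

/* Prepend a layer's header to a message, returning the new length. */
size_t encapsulate(const char *hdr, const char *payload,
                   char *out, size_t outsz) {
    snprintf(out, outsz, "[%s]%s", hdr, payload);
    return strlen(out);
}

int main(void) {
    char seg[128], pkt[160], frame[192];
    const char *data = "GET /index.html";         /* application data */
    encapsulate("TCP", data, seg, sizeof seg);    /* transport layer  */
    encapsulate("IP", seg, pkt, sizeof pkt);      /* network layer    */
    encapsulate("ETH", pkt, frame, sizeof frame); /* link layer       */
    printf("on the wire: %s\n", frame);
    /* prints: [ETH][IP][TCP]GET /index.html */
    return 0;
}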
 
10/17: Lecture 17
10/22: Lecture 18

Internet Architecture

This module is the centerpiece of our discussions in networking. We will explore the core layered protocol stack and the core protocol, the Internet Protocol (IP), underpinning the Internet. Our explorations cover both IPv4 (the original, and currently dominant) and IPv6 (the next-generation) versions of the protocol. We will look at reliable (TCP, the Transmission Control Protocol) and unreliable (UDP, the User Datagram Protocol) communications (a socket sketch follows the objectives).

Readings:
[PD] Ch {1-5}

HW3 (Due 11/13)
  Objectives:
  1. Identify the rationale for layering and the role it plays in encapsulation
  2. Explain fragmentation and reassembly in IP networks
  3. Explain the inner workings of IPv4 and IPv6 alongside key architectural differences that exist between them
  4. Identify key differences in connection-oriented and connectionless communications
  5. Explain key concepts underpinning TCP such as congestion control, flow control, and the sliding-window protocol, including setup and teardown of TCP connections
  6. Design and develop programs that leverage networked communications.
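
A minimal C sketch toward objective 6 using POSIX sockets (my own; the destination 127.0.0.1:9999 is an arbitrary choice for illustration): UDP is connectionless, so there is no setup and a single sendto() hands a datagram to the network stack.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_DGRAM, 0);   /* UDP socket */
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in dst;
    memset(&dst, 0, sizeof dst);
    dst.sin_family = AF_INET;
    dst.sin_port = htons(9999);                /* port, in network byte order */
    inet_pton(AF_INET, "127.0.0.1", &dst.sin_addr);

    const char *msg = "hello over UDP";
    if (sendto(fd, msg, strlen(msg), 0,
               (struct sockaddr *)&dst, sizeof dst) < 0)
        perror("sendto");
    close(fd);
    return 0;
}

A receiver bound to the same port (for example, nc -u -l 9999) would print the message; no handshake or teardown is involved, unlike TCP.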
 
10/24: Lecture 19
10/29: Lecture 20
10/31: Lecture 21

Mid-Term II (11/05): Covers all topics from Mid-Term I up until the lecture on Tuesday.


 
   
Networking Extras

In this module we look at additional critical concepts in networking. In particular, we look at how ISPs cope with the limited availability of unique IPv4 addresses, how several of these design decisions also help address security issues, and how hosts are able to discover other hosts over the Internet. Our discussions encompass the interrelated issues of private IP addresses, network address translation, and the Domain Name System (DNS) (a name-resolution sketch follows the objectives).
  Objectives:
  1. Describe how network address translation (NAT) works
  2. Highlight the need and rationale for private addresses
  3. Distill the core ideas in DNS and how host names are resolved to network addresses
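
A minimal C sketch of name resolution (POSIX getaddrinfo; "example.com" is just an illustrative host name): this is the call through which applications typically consult DNS, and it can return both IPv4 and IPv6 addresses.

#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netdb.h>
#include <arpa/inet.h>

int main(void) {
    struct addrinfo hints, *res, *p;
    memset(&hints, 0, sizeof hints);
    hints.ai_family = AF_UNSPEC;        /* ask for both IPv4 and IPv6 */
    hints.ai_socktype = SOCK_STREAM;

    int err = getaddrinfo("example.com", NULL, &hints, &res);
    if (err) { fprintf(stderr, "%s\n", gai_strerror(err)); return 1; }

    for (p = res; p != NULL; p = p->ai_next) {
        char buf[INET6_ADDRSTRLEN];
        void *addr = (p->ai_family == AF_INET)
            ? (void *)&((struct sockaddr_in *)p->ai_addr)->sin_addr
            : (void *)&((struct sockaddr_in6 *)p->ai_addr)->sin6_addr;
        inet_ntop(p->ai_family, addr, buf, sizeof buf);
        printf("%s: %s\n", p->ai_family == AF_INET ? "IPv4" : "IPv6", buf);
    }
    freeaddrinfo(res);
    return 0;
}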
 
11/07: Lecture 23

Data Structures for Storage Systems

In this module, we look at key data structures for one of the most important I/O devices: stable storage. We describe the key differences between in-memory and on-disk data structures. To keep our discussions self-contained, we start with linear scan, which we then improve by pruning search spaces using binary search. We discuss the notion of dynamic data structures that adapt to the contents of the data items. Next, we cover a popular in-memory data structure, the Binary Search Tree (BST), and then one of the most foundational data structures for storage systems: the B-Tree. B-Trees are designed to align with how data access and transfers (block-based) take place in storage systems. We highlight both the similarities and the key differences between BSTs and B-Trees, and discuss insertions, deletions, merges, and balancing in B-Trees. We complete our discussion with variants such as B*-Trees and B+-Trees (a BST sketch follows the objectives).

HW4 (Due: 5/3)
 

Objectives:

  1. Identify how binary search improves upon linear scan
  2. Explain key concepts underlying dynamic data structures
  3. Design and develop programs that implement BSTs including insertions, deletions, and search
  4. Design and develop programs that implement B-Trees including insertions, deletions, and search
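
A minimal C sketch of the BST portion (my own; deletion and balancing are omitted for brevity): each comparison discards half of the remaining search space, the idea that B-Trees generalize to block-sized nodes on disk.

#include <stdio.h>
#include <stdlib.h>

typedef struct node {
    int key;
    struct node *left, *right;
} node_t;

/* Insert key, keeping smaller keys left and larger keys right. */
node_t *insert(node_t *root, int key) {
    if (root == NULL) {                      /* empty spot: attach here */
        node_t *n = malloc(sizeof *n);
        n->key = key;
        n->left = n->right = NULL;
        return n;
    }
    if (key < root->key)      root->left  = insert(root->left, key);
    else if (key > root->key) root->right = insert(root->right, key);
    return root;                             /* duplicates are ignored */
}

/* Each comparison prunes one subtree from consideration. */
int search(const node_t *root, int key) {
    while (root != NULL) {
        if (key == root->key) return 1;
        root = (key < root->key) ? root->left : root->right;
    }
    return 0;
}

int main(void) {
    node_t *root = NULL;
    int keys[] = { 42, 17, 99, 8, 23 };
    for (int i = 0; i < 5; i++) root = insert(root, keys[i]);
    printf("search(23) = %d\n", search(root, 23));  /* 1 */
    printf("search(50) = %d\n", search(root, 50));  /* 0 */
    return 0;
}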

 
11/12: Lecture 24
11/12: Lecture 25
11/14: Lecture 26
11/19: Lecture 27

Graphics Processing Units (GPUs)

Graphics processing units (GPUs) are used extensively in gaming and AI/ML (Artificial Intelligence/Machine Learning) applications. In this module, we will explore GPUs and contrast them with CPU-based systems. A key emphasis is to identify the specific types of problems that GPUs are particularly well-suited for. We will explore memory locality and cache coherency in both GPU-based and CPU-based systems. Notably, CPUs and GPUs work in tandem, and workloads that effectively target them will see considerably improved performance (a short data-parallel sketch follows the objectives).

 
 

Objectives:

  1. Explain the role of the GPU in the computing ecosystem
  2. Identify how the GPU differs from (and complements) the CPU
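
A minimal sketch in plain C rather than actual CUDA (my own illustration): SAXPY, a classic data-parallel kernel of the kind GPUs excel at. Every iteration is independent, so a GPU can map one iteration to each of thousands of threads; the serial loop below is what a CPU would run.

#include <stdio.h>

/* y[i] = a * x[i] + y[i] -- no iteration depends on any other. */
void saxpy(int n, float a, const float *x, float *y) {
    for (int i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}

int main(void) {
    float x[4] = { 1, 2, 3, 4 };
    float y[4] = { 10, 20, 30, 40 };
    saxpy(4, 2.0f, x, y);
    for (int i = 0; i < 4; i++) printf("%.0f ", y[i]);  /* 12 24 36 48 */
    printf("\n");
    return 0;
}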
 
11/21: Lecture 28
12/03: Lecture 29
12/05: Lecture 30

Comprehensive Final Exam: Tuesday, December 10th, 8:30-10:30 pm

Department of Computer Science, Colorado State University,
Fort Collins, CO 80523 USA
© 2023 Colorado State University