The Linux Scheduler: a Decade of Wasted Cores (2024)

The Linux Scheduler: a Decade of Wasted Cores – Lozi et al. 2016

This is the first in a series of papers from EuroSys 2016. There are three strands here: first of all, there’s some great background into how scheduling works in the Linux kernel; secondly, there’s a story about Software Aging and how changing requirements and maintenance can cause decay; and finally, the authors expose four bugs in Linux scheduling that caused cores to remain idle even when there was pressing work waiting to be scheduled. Hence the paper title, “A Decade of Wasted Cores.”

In our experiments, these performance bugs caused many-fold performance degradation for synchronization-heavy scientific applications, 13% higher latency for kernel make, and a 14-23% decrease in TPC-H throughput for a widely used commercial database.

The evolution of scheduling in Linux

By and large, by the year 2000, operating systems designers considered scheduling to be a solved problem… (the) year 2004 brought an end to Dennard scaling, ushered in the multicore era and made energy efficiency a top concern in the design of computer systems. These events once again made schedulers interesting, but at the same time increasingly more complicated and often broken.

Linux uses a Completely Fair Scheduling (CFS) algorithm, which is an implementation of weighted fair queueing (WFQ). Imagine a single CPU system to start with: CFS time-slices the CPU among running threads. There is a fixed time interval during which each thread in the system must run at least once. This interval is divided into timeslices that are allocated to threads according to their weights.

A thread’s weight is essentially its priority, or niceness in UNIX parlance. Threads with lower niceness have higher weights and vice versa.

A running thread accumulates vruntime (runtime / weight). When a thread’s vruntime exceeds its assigned timeslice it will be pre-empted.

Threads are organized in a runqueue, implemented as a red-black tree, in which the threads are sorted in the increasing order of their vruntime. When a CPU looks for a new thread to run it picks the leftmost node in the red-black tree, which contains the thread with the smallest vruntime.
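The vruntime mechanics above can be sketched in a few lines (a toy model with invented names, not kernel code; the 1024 and 335 weights are the kernel's values for nice 0 and nice 5):

```python
# Toy model of CFS vruntime accounting (names invented, not kernel code).
class Thread:
    def __init__(self, name, weight):
        self.name = name
        self.weight = weight   # higher weight = lower niceness (nice 0 = 1024)
        self.vruntime = 0.0
        self.ticks = 0

    def run_for(self, ms):
        # A heavier (higher-priority) thread accrues vruntime more slowly,
        # so it keeps being picked and gets proportionally more CPU time.
        self.vruntime += ms / self.weight
        self.ticks += 1

def pick_next(threads):
    # The kernel keeps threads in a red-black tree sorted by vruntime and
    # takes the leftmost node; min() over a list is the same idea.
    return min(threads, key=lambda t: t.vruntime)

threads = [Thread("nice0", weight=1024), Thread("nice5", weight=335)]
for _ in range(100):
    pick_next(threads).run_for(1)   # 1 ms scheduler tick
```

After 100 ticks the nice-0 thread has received roughly weight-proportional CPU time (about three-quarters of the ticks), which is exactly the "weighted" in weighted fair queueing.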

So far so good, but now we have to talk about multi-core systems…

Firstly we need per-core runqueues so that context switches can be fast. Now we have a new problem of balancing work across multiple runqueues.

Consider a dual-core system with two runqueues that are not balanced. Suppose that one queue has one low-priority thread and another has ten high-priority threads. If each core looked for work only in its local runqueue, then high-priority threads would get a lot less CPU time than the low-priority thread, which is not what we want. We could have each core check not only its runqueue but also the queues of other cores, but this would defeat the purpose of per-core runqueues. Therefore, what Linux and most other schedulers do is periodically run a load-balancing algorithm that will keep the queues roughly balanced.
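PLACEHOLDER-UNUSED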

Since load balancing is expensive, the scheduler tries not to do it more often than is absolutely necessary. Therefore, in addition to periodic load balancing, the scheduler can also trigger emergency load balancing when a core becomes idle. CFS balances runqueues not just based on weights, but on a metric called load, which is the combination of the thread's weight and its average CPU utilization. To account for the bias that could occur when one process has lots of threads and another has few, in version 2.6.38 Linux added a group scheduling (cgroup) feature.

When a thread belongs to a cgroup, its load is further divided by the total number of threads in its cgroup. This feature was later extended to automatically assign processes that belong to different ttys to different cgroups (autogroup feature).
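A sketch of that load metric with the cgroup correction (the function and field names here are illustrative, not the kernel's):

```python
# Illustrative sketch of per-thread "load" with the cgroup correction
# described above (names invented, not the kernel's data structures).

def thread_load(weight, avg_cpu_utilization, cgroup_size=1):
    # Base load combines priority (weight) with how much CPU the
    # thread actually uses on average.
    load = weight * avg_cpu_utilization
    # With group scheduling, a thread's load is further divided by the
    # number of threads in its cgroup, so a 64-thread process doesn't
    # carry 64x the load of a single-threaded one at the same priority.
    return load / cgroup_size

# One process with 64 busy threads vs one single-threaded process:
many = sum(thread_load(1024, 1.0, cgroup_size=64) for _ in range(64))
one = thread_load(1024, 1.0, cgroup_size=1)
```

With the correction, the two processes contribute equal total load, which is the bias the cgroup feature was added to remove.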

So can we just compare the load of all the cores and transfer tasks from the most loaded to least loaded core? Unfortunately not! This would result in threads being migrated without considering cache locality or NUMA. So the load balancer uses a hierarchical strategy. Each level of the hierarchy is called a scheduling domain. At the bottom level are single cores, groupings in higher levels depend on how the machine’s physical resources are shared.

Here’s an example:

[Figure: an example scheduling-domain hierarchy, from single cores at the bottom up through shared-cache and NUMA-node groupings]

Load balancing is run for each scheduling domain, starting from the bottom to the top. At each level, one core of each domain is responsible for balancing the load. This core is either the first idle core of the scheduling domain, if the domain has idle cores whose free CPU cycles can be used for load balancing, or the first core of the scheduling domain otherwise. Following this, the average load is computed for each scheduling group of the scheduling domain and the busiest group is picked, based on heuristics that favor overloaded and imbalanced groups. If the busiest group's load is lower than the local group's load, the load is considered balanced at this level. Otherwise, the load is balanced between the local CPU and the busiest CPU of the group, with a tweak to ensure that load-balancing works even in the presence of tasksets.

The scheduler prevents duplicating work by running the load-balancing algorithm only on the designated core for the given scheduling domain. This is the lowest numbered core in a domain if all cores are busy, or the lowest numbered idle core if one or more cores are idle. If idle cores are sleeping (power management) then the only way for them to get work is to be awoken by another core. If a core thinks it is overloaded it checks whether there have been tickless idle cores in the system for some time, and if so it wakes up the first one and asks it to run the periodic load balancing routine on behalf of itself and all of the other tickless idle cores.
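The designated-core rule can be expressed as a small function (a simplification; the real kernel walks scheduling-domain structures rather than sets):

```python
# Sketch of choosing the core that runs load balancing for a domain,
# per the rule above (simplified; not the kernel's implementation).

def designated_core(domain_cores, idle_cores):
    # If the domain has idle cores, the lowest-numbered idle one does the
    # balancing, since its free cycles are cheap; otherwise the
    # lowest-numbered busy core takes the job.
    idle_here = idle_cores & domain_cores
    if idle_here:
        return min(idle_here)
    return min(domain_cores)

core = designated_core({4, 5, 6, 7}, idle_cores={6, 7})   # core 6
```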

Four scheduling bugs

With so many rules about when the load balancing does or does not occur, it becomes difficult to reason about how long an idle core would remain idle if there is work to do and how long a task might stay in a runqueue waiting for its turn to run when there are idle cores in the system.

The four bugs that the authors found are the group imbalance bug, the scheduling group construction bug, the overload on wakeup bug, and the missing scheduling domains bug.

Group imbalance

Oh, the joy of averages for understanding load. I believe Gil Tene has a thing or two to say about that :).

When a core attempts to steal work from another node, or, in other words, from another scheduling group, it does not examine the load of every core in that group, it only looks at the group’s average load. If the average load of the victim scheduling group is greater than that of its own, it will attempt to steal from that group; otherwise it will not. This is the exact reason why in our situation the underloaded cores fail to steal from the overloaded cores on other nodes. They observe that the average load of the victim node’s scheduling group is not any greater than their own. The core trying to steal work runs on the same node as the high-load R thread; that thread skews up the average load for that node and conceals the fact that some cores are actually idle. At the same time, cores on the victim node, with roughly the same average load, have lots of waiting threads.

The fix was to compare minimum loads instead of the average. The minimum load is the load of the least loaded core in the group. “If the minimum load of one scheduling group is lower than the minimum load of another scheduling group, it means that the first scheduling group has a core that is less loaded than all cores in the other group, and thus a core in the first group must steal from the second group.” With the fix applied the completion time of a make/R workload decreased by 13%, and a 60 thread benchmark with four single-threaded R processes ran 13x faster.
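The average-vs-minimum distinction is easy to see with a toy two-node example (the load numbers are invented for illustration):

```python
# Toy illustration (invented numbers) of the group imbalance bug:
# comparing *average* per-core loads hides idle cores behind one
# high-load thread, while comparing *minimum* loads exposes them.

node_a = [3000, 0, 0, 0]       # one heavy R thread, three idle cores
node_b = [750, 750, 750, 750]  # four cores, each with waiting threads

def avg(loads):
    return sum(loads) / len(loads)

# Average loads are equal (both 750), so a core on node A sees no
# reason to steal from node B and its idle cores stay idle -- the bug.
steal_by_avg = avg(node_a) < avg(node_b)

# The fix: if node A's least-loaded core is less loaded than every
# core on node B, a core in node A should steal from node B.
steal_by_min = min(node_a) < min(node_b)
```

With averages the steal never triggers; with minimums it does, which is why the idle cores finally pick up the waiting threads.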

Scheduling group construction

The Linux taskset command pins applications to run on a subset of the available cores. When an application is pinned on nodes that are two hops apart, a bug prevented the load balancing algorithm from migrating threads between them.

The bug is due to the way scheduling groups are constructed, which is not adapted to modern NUMA machines such as the one we use in our experiments. In brief, the groups are constructed from the perspective of a specific core (Core 0), whereas they should be constructed from the perspective of the core responsible for load balancing on each node.

The result is that nodes can be included in multiple scheduling groups. Suppose Nodes 1 and 2 both end up in two groups…

Suppose that an application is pinned on Nodes 1 and 2 and that all of its threads are being created on Node 1 (Linux spawns threads on the same core as their parent thread; when an application spawns multiple threads during its initialization phase, they are likely to be created on the same core – so this is what typically happens). Eventually we would like the load to be balanced between Nodes 1 and 2. However, when a core on Node 2 looks for work to steal, it will compare the load between the two scheduling groups shown earlier. Since each scheduling group contains both Nodes 1 and 2, the average loads will be the same, so Node 2 will not steal any work!

The fix is to change the construction of scheduling groups. Across a range of applications, this results in speed-ups ranging from 1.3x to 27x.

Overload-on-wakeup

When a thread goes to sleep on Node X and the thread that wakes it up later is running on that same node, the scheduler only considers the cores of Node X for scheduling the awakened thread. If all cores of Node X are busy, the thread will wake up on an already busy core and miss opportunities to use idle cores on other nodes. This can lead to a significant under-utilization of the machine, especially on workloads where threads frequently wait.

The original rationale for the behaviour was to maximise cache reuse – but for some applications waiting in the runqueue for the sake of better cache reuse does not pay off. The bug was triggered by a widely used commercial database configured with 64 worker threads.

To fix this bug, we alter the code that is executed when a thread wakes up. We wake up the thread on the local core – i.e. the core where the thread was scheduled last – if it is idle; otherwise if there are idle cores in the system, we wake up the thread on the core that has been idle for the longest amount of time. If there are no idle cores, we fall back to the original algorithm to find the core where the thread will wake up.
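The wakeup placement policy of the fix can be sketched as a three-way decision (simplified; core identifiers and the `idle_since` map are invented for illustration):

```python
# Sketch of the wakeup-placement fix described above (simplified).
# idle_since maps core -> timestamp at which that core became idle.

def wakeup_core(last_core, idle_since, fallback):
    if last_core in idle_since:
        return last_core        # best case: cache-warm *and* idle
    if idle_since:
        # Otherwise pick the core idle the longest, i.e. the one with
        # the smallest "became idle" timestamp.
        return min(idle_since, key=idle_since.get)
    return fallback()           # no idle cores: original algorithm

# Example: core 3 went idle at t=10, core 5 at t=2 (idle longer), and
# the waking thread last ran on the busy core 0.
core = wakeup_core(last_core=0, idle_since={3: 10, 5: 2},
                   fallback=lambda: 0)
```

Preferring the longest-idle core also plays well with power management: a core idle for a long time was going to be woken by someone eventually anyway.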

The fix improved performance by 22.2% on the 18th query of TPC-H, and by 13.2% on the full TPC-H workload.

Missing scheduling domains

The final bug seems to have been inadvertently introduced during maintenance.

When a core is disabled and then re-enabled using the /proc interface, load balancing between any NUMA nodes is no longer performed… We traced the root cause of the bug to the code that regenerates the machine’s scheduling domains. Linux regenerates scheduling domains every time a core is disabled. Regenerating the scheduling domains is a two-step process: the kernel regenerates domains inside NUMA nodes, and then across NUMA nodes. Unfortunately, the call to the function generating domains across NUMA nodes was dropped by Linux developers during code refactoring. We added it back, and doing so fixed the bug.

Before the fix, disabling and then re-enabling one core in the system could cause all threads of an application to run on a single core instead of eight. Unsurprisingly, the system performs much better (up to 138x better in one case!) with the fix.

Lessons and tools

… new scheduler designs come and go. However, a new design, even if clean and purportedly bug-free initially, is not a long-term solution. Linux is a large open-source system developed by dozens of contributors. In this environment, we will inevitably see new features and ‘hacks’ retrofitted into the source base to address evolving hardware and applications.

Is improved modularity the answer?

We now understand that rapid evolution of hardware that we are witnessing today will motivate more and more scheduler optimizations. The scheduler must be able to easily integrate them, and have a way of reasoning about how to combine them. We envision a scheduler that is a collection of modules: the core module, and optimization modules…

Catching the kind of bugs described in this paper with conventional tools is tricky – there are no crashes or out-of-memory conditions, and the lost short-term idle periods cannot be noticed with tools such as htop, sar, or perf.

Our experience motivated us to build new tools, using which we could productively confirm the bugs and understand why they occur.

The first tool is described by the authors as a sanity checker. It verifies that no core is idle while another core’s runqueue has waiting threads. It allows such a condition to exist for a short period, but raises an alert if it persists. The second tool was a visualizer showing scheduling activity over time. This makes it possible to profile and plot the size of runqueues, the total load of runqueues, and the cores that were considered during periodic load balancing and thread wakeups.
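The sanity checker's invariant can be captured in a few lines (a minimal sketch; the sampling format and grace period here are invented, and the real tool instruments the kernel rather than polling snapshots):

```python
# Minimal sketch of the "sanity checker" invariant: alert when some
# core is idle while another core has threads waiting in its runqueue,
# but only if the condition persists past a short grace period.

def violations(samples, grace=2):
    # samples: one runqueue-length snapshot per tick, e.g. [0, 3] means
    # core 0 has an empty runqueue while core 1 has 3 threads queued.
    bad_streak = 0
    alerts = []
    for tick, queues in enumerate(samples):
        idle = any(q == 0 for q in queues)
        waiting = any(q > 1 for q in queues)  # >1 means threads are queued
        if idle and waiting:
            bad_streak += 1
            if bad_streak >= grace:
                alerts.append(tick)   # work-conserving invariant violated
        else:
            bad_streak = 0            # transient states are tolerated
    return alerts

# Core 0 idle while core 1 has a backlog for three consecutive ticks:
alerts = violations([[0, 3], [0, 3], [0, 3], [1, 1]])
```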

Here’s an example of a visualization produced by the tool:

[Figure: the visualizer's plot of scheduling activity over time]

The authors conclude:

Scheduling, as in dividing CPU cycles among threads was thought to be a solved problem. We show that this is not the case. Catering to complexities of modern hardware, a simple scheduling policy resulted in a very complex bug-prone implementation. We discovered that the Linux scheduler violates a basic work-conserving invariant: scheduling waiting threads onto idle cores. As a result, runnable threads may be stuck in runqueues for seconds while there are idle cores in the system; application performance may degrade many-fold. The nature of these bugs makes it difficult to detect them with conventional tools. We fix these bugs, understand their root causes and present tools, which make catching and fixing these bugs substantially easier. Our fixes and tools will be available at http://git.io/vaGOW.
