Academic Positions

  • Mar 2017 – Present

    Senior Research Fellow

    Durham University, School of Engineering and Computing Sciences

  • Mar 2014 – Feb 2017

    Research Associate

    Durham University, Institute of Advanced Research Computing

  • Mar 2010 – Feb 2014

    P/T Lecturer

    University of Huddersfield, School of Computing and Engineering

  • Sep 2009 – Dec 2013

    Senior System Administrator

    University of Huddersfield, HPC Resource Centre

  • Feb 2010 – Mar 2010

    P/T Lecturer

    University Centre Blackburn College, VET Program

Education & Training

  • 2015

    Ph.D. in High Performance Computing

    University of Huddersfield
    HPC Research Group

  • 2011

    MSc by Research in High Performance Computing

    University of Huddersfield
    HPC Research Group

  • 2009

    Bachelor of Engineering (Hons) in Electronic Engineering and Computer Science

    University of Huddersfield

Research Projects

  • Modelling HPC Systems leading to Green Datacentres

    Active

    Working within Tectre Enterprise Solutions to devise software that models datacentre components for HPC systems. Information such as power and cooling requirements will be used to generate designs and reports that better inform purchase and provisioning decisions.

  • Moonshot Integration of HPC Resources

    Active

    The University of Huddersfield is an early adopter of the Moonshot project, linking HPC systems with the existing RADIUS servers that serve eduroam.

  • Data Processing Using Map-Reduce Technologies

    Proposed: Cycle stealing for Hadoop.

    Developing an in-house map-reduce system to run on the existing cycle-stealing Condor pool.

    The project aims to avoid the need for dedicated Hadoop resources and to reduce dependence on commercial providers (e.g. Amazon).
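The map-reduce model this project targets can be sketched in a few lines. This toy word count is illustrative only (not the project's code; all function names are hypothetical) and shows the map, shuffle and reduce phases, each of which could be farmed out as an independent job on the Condor pool:

```python
from collections import defaultdict
from itertools import chain

def map_phase(chunk):
    # Map: emit a (word, 1) pair for every word in this chunk.
    return [(word, 1) for word in chunk.split()]

def shuffle(pairs):
    # Shuffle: group intermediate pairs by key (word).
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: sum the counts collected for each word.
    return {key: sum(values) for key, values in groups.items()}

# Each chunk could run as a separate job on an idle pool machine.
chunks = ["the quick brown fox", "the lazy dog", "the fox"]
mapped = chain.from_iterable(map_phase(c) for c in chunks)
counts = reduce_phase(shuffle(mapped))
```

The shuffle step is where a Condor-based implementation would do real work, since intermediate pairs must be gathered from many scavenged machines before reduction.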

  • Scavenging Storage Space in a Condor Network using Error Correcting Codes

    Proposed: Big Network for Big Data.

    Large Condor pools whose nodes carry 500-1000 GB local hard drives are a prime source of storage. While simple duplication has its flaws, error-correcting codes (like those used in network communication) are being implemented to harness this dormant resource.
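The simplest code of this kind is single parity, as used in RAID-5; the project would presumably use stronger erasure codes, but this hypothetical sketch shows the core idea of rebuilding a block lost when one pool machine disappears:

```python
def xor_blocks(blocks):
    # XOR equal-length byte strings together, byte by byte.
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

def encode(data_blocks):
    # Spread n data blocks plus one parity block across n+1 nodes.
    return data_blocks + [xor_blocks(data_blocks)]

def recover(stored, lost_index):
    # Any single missing block is the XOR of all the survivors.
    survivors = [b for i, b in enumerate(stored) if i != lost_index]
    return xor_blocks(survivors)

blocks = [b"node", b"disk", b"data"]
stored = encode(blocks)
```

Single parity tolerates only one node loss; on a volatile cycle-stealing pool a real deployment would need codes surviving several simultaneous departures, which is why the project points to stronger error-correcting codes.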

  • Real-time Remote Visualization

    Proposed: Live CAD and system visualisation.

    Developing mechanisms for real-time processing and visualization of data from X-ray tomography and particle velocimetry devices (to name a few), streamed through an HPC/GPGPU system to a purpose-built 3D visualization suite.

  • Hybrid HPC Systems

    Completed: Windows/Linux dual-boot cluster.

    Created a mechanism for different HPC managers on different platforms to communicate with each other and share compute endpoints. A working system was deployed and used for two years; it comprised a Windows HPC 2008 head node and a Linux/Torque head node sharing 64 compute nodes.

    The system was later enhanced to use virtualised environments on the compute nodes.

  • Expanding HPC Resources Using a ‘Green’ Condor

    Completed: Green supercomputing using Condor and power management.

    Investigated methods of deploying cycle stealing software to harness idle campus resources. Integrated the system with power management features to ensure IT carbon reduction targets were still met.

    Added the use of virtualisation to provide different runtime environments.

  • Tools for Automated Job Submission of Mechanical Engineering Applications

    Completed: Grid-enabled Java front ends for CFD and FEA packages.

    Designed portals and plugins to work with popular mechanical engineering software packages (e.g. Fluent). This made adoption of HPC technologies easier for undergraduates and hid the complexities of UNIX clusters from the average mechanical engineering user.

  • 3D Rendering on Hybrid Systems

    Completed: Grid-enabled web front end for CAD packages.

    Designed a portal system to allow Art students to render their 3D sequences on a cycle-stealing render farm and the dedicated on-campus HPC systems.

  • Investigating Grid Middleware for Campus Deployment

    Completed: Condor-G, gLite and Globus at campus grid level.

    Using grid technologies typically employed for national and international connectivity, a grid system was deployed at the University of Huddersfield. This linked the University’s disparate resources and allowed users to scale beyond on-campus capacity.

  • Developing Scalable Software to Evaluate Automotive Fuel Efficiency

    Completed: Profiling vehicle telemetry against known models of fuel consumption.

    Designed a software solution with mechanical engineers to assess vehicular telemetry data, identifying those portions that matched the New European Driving Cycle (NEDC) standard. The final software took advantage of multicore processors and traditional HPC and HTC systems.

  • Deployment of a Private IaaS Cloud

    Completed: Private cloud for H.E. teaching and research activities.

    Investigated, deployed and maintained an Infrastructure as a Service private cloud at the University of Huddersfield. Worked with Software Design teams and faculty members from various disciplines to build Platform and Software services.

  • Using Agents to Improve Existing Grid Middleware

    Completed: Intelligent grids.

    Working with the Systems Research Institute of the Polish Academy of Sciences to integrate agents, in the form of the AiG system, with existing grid middleware such as gLite and Unicore.

  • Establishing a Trusted Grid

    Completed: Private grids for research computing systems integration.

    Based on prior experience of linking job schedulers, this project linked TORQUE, Condor and LSF to give users a single point of submission with a single job description language. The trusted grid also allows for surging into the established private cloud.

  • Using Hadoop for Error Correction in NHS Surgical Records

    Completed: Optimising big data problems and investigating solutions.

    Working with the University of Glasgow on converting an existing SPARQL/RDF tool for detecting errors in National Health Service surgical records to Hadoop. The aim is to reduce the computation time required and make the software scalable. This project grew out of involvement in the National Grid Service and the Software Sustainability Institute’s Collaborations Workshop held in Oxford in 2012.

Publications

PBStoHTCondor System for Campus Grids

Violeta Holmes, John Brennan, Ibad Kureshi, Stephen Bonner
Conference Paper: Proceedings of the 2015 Science and Information Conference

Abstract

The campus grid architectures currently available are considered to be overly complex. We have focused on High Throughput Condor (HTCondor), one of the most popular middlewares among UK universities, and are proposing a new system for unifying campus grid resources. This new system, PBStoCondor, is capable of interfacing with Linux-based systems within the campus grid and automatically determining the best resource for a given job. The system does not require additional effort from users and administrators of the campus grid resources. We have compared real usage data with PBStoCondor system simulation data, and the results show a close match. The proposed system will enable better utilization of campus grid resources, and will not require modification of users’ workflows.

Developing High Performance Computing Resources for Teaching Cluster and Grid Computing courses

Violeta Holmes, Ibad Kureshi
Conference Paper: Proceedings of the 2015 BRIDGE Workshop at the International Conference on Computational Science

Abstract

High-Performance Computing (HPC) and the ability to process large amounts of data are of paramount importance for UK business and the economy, as outlined by Rt Hon David Willetts MP at the HPC and Big Data conference in February 2014. However, there is a shortage of skills and available training in HPC to prepare and expand the workforce for HPC and Big Data research and development. Currently, HPC skills are acquired mainly by students and staff taking part in HPC-related research projects, MSc courses, and at dedicated training centres such as Edinburgh University’s EPCC. Few UK universities teach HPC, Cluster and Grid Computing courses at the undergraduate level. To address the issue of skills shortages in HPC it is essential to provide teaching and training as part of both postgraduate and undergraduate courses. The design and development of such courses is challenging, since the technologies and software in the fields of large-scale distributed systems such as Cluster, Cloud and Grid computing are undergoing continuous change. Students completing the HPC courses should be proficient in these evolving technologies and equipped with practical and theoretical skills for future jobs in this fast-developing area. In this paper we present our experience in developing the HPC, Cluster and Grid modules, including a review of existing HPC courses offered at UK universities. The topics covered in the modules are described, as well as the coursework project based on practical laboratory work. We conclude with an evaluation based on our experience over the last ten years in developing and delivering the HPC modules on undergraduate courses, with suggestions for future work.

Using Hadoop To Implement a Semantic Method Of Assessing The Quality Of Research Medical Datasets

Stephen Bonner, Grigoris Antoniou, Laura Moss, Ibad Kureshi, David Corsair, Illias Tachmazidis
Conference Paper: Proceedings of the 2014 International Conference on Big Data Science and Computing

Abstract

In this paper a system for storing and querying medical RDF data using Hadoop is developed. This approach enables us to create an inherently parallel framework that will scale the workload across a cluster. Unlike existing solutions, our framework uses highly optimised joining strategies to enable the completion of eight separate SPARQL queries, comprising over eighty distinct joins, in only two Map/Reduce iterations. Results are presented comparing an optimised version of our solution against Jena TDB, demonstrating the superior performance of our system and its viability for assessing the quality of medical data.

CDES: An Approach to HPC Workload Modelling

John Brennan, Violeta Holmes, Ibad Kureshi
Conference Paper: 18th International Symposium on Distributed Simulation and Real Time Applications

Abstract

Computational science and complex system administration rely on being able to model user interactions. When it comes to managing HPC, HTC and grid systems, user workloads (their job submission behaviour) are an important metric when designing systems or scheduling algorithms. Most simulators are either inflexible or tied in to proprietary scheduling systems. For system administrators, being able to model how a scheduling algorithm behaves or how modifying system configurations can affect job completion rates is critical. Within computer science research many algorithms are presented with no real description or verification of behaviour. In this paper we present the Cluster Discrete Event Simulator (CDES) as a strong candidate for HPC workload simulation. Built around an open framework, CDES can take system definitions and multi-platform real usage logs, and can be interfaced with any scheduling algorithm through the use of an API. CDES has been tested against three years of usage logs from a production-level HPC system and verified to greater than 95% accuracy.

Robust Moldable Scheduling Using Application Benchmarking for Elastic Environments

Ibad Kureshi, Violeta Holmes, David Cooke
Conference Paper: 5th Balkan Conference in Informatics, CEUR Workshop Proceedings, September 2012, Vol. 920, pp. 51-57

Abstract

In this paper we present a framework for developing an intelligent job management and scheduling system that utilizes application-specific benchmarks to mould jobs onto available resources. In an attempt to achieve the seemingly irreconcilable goals of maximum usage and minimum turnaround time, this research aims to adapt an open-framework benchmarking scheme to supply information to a mouldable job scheduler. In a green-IT obsessed world, hardware efficiency and usage of computer systems become essential. With an average computer rack consuming between 7 and 25 kW, it is essential that resources be utilized in the most optimal way possible. Currently the batch schedulers employed to manage these multi-user multi-application environments are little more than match-making and service level agreement (SLA) enforcing tools. These management systems rely on user-prescribed parameters that can lead to over- or under-booking of compute resources. System administrators strive to get maximum “usage efficiency” from the systems by manual fine-tuning and restricting queues. Existing mouldable scheduling strategies utilize scalability characteristics, which are inherently two-dimensional and cannot provide predictable scheduling information. In this paper we consider existing benchmarking schemes and tools, schedulers and scheduling strategies, and elastic computational environments. We propose a novel job management system that will extract performance characteristics of an application, with an associated dataset and workload, to devise optimal resource allocations and scheduling decisions. As we move towards an era where on-demand computing becomes the fifth utility, the end product of this research will cope with elastic computational environments.
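As a rough illustration of the benchmark-driven idea (a sketch under assumed data, not the paper's actual algorithm; the function name, threshold and profile are hypothetical), a mouldable scheduler might pick the largest core count whose measured parallel efficiency stays above some floor, trading turnaround time against usage efficiency:

```python
def choose_allocation(benchmarks, min_efficiency=0.7):
    """Pick the largest core count whose parallel efficiency,
    measured relative to the smallest benchmarked run,
    stays above min_efficiency.

    benchmarks: {cores: measured_runtime_seconds}
    """
    base_cores = min(benchmarks)
    base_time = benchmarks[base_cores]
    best = base_cores
    for cores in sorted(benchmarks):
        speedup = base_time / benchmarks[cores]
        efficiency = speedup / (cores / base_cores)
        if efficiency >= min_efficiency:
            best = cores
    return best

# Hypothetical benchmark profile for one application + dataset:
profile = {1: 1000.0, 8: 150.0, 16: 90.0, 32: 70.0}
```

With this profile, 8 cores give a speedup of about 6.7 (83% efficient) while 16 cores fall below the 70% floor, so the scheduler would mould the job onto 8 cores rather than the user's possibly over-booked request.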

Advancing Research Infrastructure Using OpenStack

Ibad Kureshi, Carl Pulley, John Brennan, Violeta Holmes, Stephen Bonner, Yvonne James
Journal Paper: International Journal of Advanced Computer Science and Applications, 3(4), pp. 64-70. ISSN 2158-107X

Abstract

Cloud computing, which evolved from grid computing, virtualisation and automation, has a potential to deliver a variety of services to the end user via the Internet. Using the Web to deliver Infrastructure, Software and Platform as a Service (SaaS/PaaS) has benefits of reducing the cost of investment in internal resources of an organisation. It also provides greater flexibility and scalability in the utilisation of the resources. There are different cloud deployment models - public, private, community and hybrid clouds. This paper presents the results of research and development work in deploying a private cloud using OpenStack at the University of Huddersfield, UK, integrated into the University campus Grid QGG. The aim of our research is to use a private cloud to improve the High Performance Computing (HPC) research infrastructure. This will lead to a flexible and scalable resource for research, teaching and assessment. As a result of our work we have deployed private QGG-cloud and devised a decision matrix and mechanisms required to expand HPC clusters into the cloud maximising the resource utilisation efficiency of the cloud. As part of teaching and assessment of computing courses an Automated Formative Assessment (AFA) system was implemented in the QGG-Cloud. The system utilises the cloud’s flexibility and scalability to assign and reconfigure required resources for different tasks in the AFA. Furthermore, the throughput characteristics of assessment workflows were investigated and analysed so that the requirements for cloud-based provisioning can be adequately made.

Huddersfield University Campus Grid: QGG of OSCAR Clusters

Violeta Holmes, Ibad Kureshi
Journal Paper: Journal of Physics: Conference Series, 256(1), 012022. ISSN 1742-6596

Abstract

In the last decade Grid Computing Technology, an innovative extension of distributed computing, is becoming an enabler for computing resource sharing among the participants in "Virtual Organisations" (VO) [1]. Although there exist enormous research efforts on grid-based collaboration technologies, most of them are concentrated on large research and business institutions. In this paper we are considering the adoption of Grid Computing Technology in a VO of small to medium Further Education (FE) and Higher-Education (HE) institutions. We will concentrate on the resource sharing among the campuses of The University of Huddersfield in Yorkshire and colleges in Lancashire, UK, enabled by the Grid. Within this context, it is important to focus on standards that support resource and information sharing, toolkits and middleware solutions that would promote Grid adoption among the FE/HE institutions in the Virtual HE organisation.

Using OpenStack to improve student experience in an H.E. environment

Stephen Bonner, Carl Pulley, Ibad Kureshi, Violeta Holmes, John Brennan, Yvonne James
Conference Paper: Proceedings of the 2013 Science and Information Conference (SAI 2013). The Science and Information Organization, London, UK, pp. 888-893. ISBN 9780989319300

Abstract

The cloud computing paradigm promises to deliver hardware to the end user at a low cost with an easy-to-use interface via the internet. This paper outlines an effort at the University of Huddersfield to deploy a private Infrastructure as a Service cloud to enhance the student learning experience. The paper covers the deployment methods and configurations for OpenStack, along with the security provisions that were taken to deliver computer hardware. The rationale behind the provision of virtual hardware and OS configurations is described in detail, supported by examples. This paper also covers how the resource has been used within the taught courses as a Virtual Laboratory, and in research projects. A second use case of the cloud, for Automated Formative Assessment (AFA) using JClouds and Chef for Continuous Integration, is presented. The AFA deployment is an example of a Software as a Service offering that has been added on to the IaaS cloud. This development has led to an increase in freedom for the student.

Scaling Campus Grids: Implementing a modified ontology based EMI-WMS on Campus Grids

John Brennan, Marcin Paprzycki, Violeta Holmes, Maria Ganzha, Ibad Kureshi, Michal Drozdowicz, Katarzyna Wasielewska
Conference Paper: The EGI Community Forum 2013, 8-12 April 2013, Manchester, UK.

Abstract

In an effort to deliver HPC services to the research community at the University of Huddersfield, many grid middlewares have been deployed in parallel to assess their effectiveness and efficiency, along with their user friendliness. With a disparate community of researchers spanning, but not limited to, 3D Art designers, Architects, Biologists, Chemists, Computer scientists, Criminologists, Engineers (Electrical and Mechanical) and Physicists, no single solution works well. As HPC is delivered as a centralised service, an ideal solution would be one that meets a majority of the needs, most of the time. The scenario is further complicated by the fact that the HPC service delivered at the University of Huddersfield comprises several small high performance clusters, a high throughput computing service, several storage resources and shared HPC services hosted off-site.

Combining AiG Agents with Unicore grid for improvement of user support.

Kamil Łysik, Katarzyna Wasielewska, Marcin Paprzycki, Michał Drozdowicz, Maria Ganzha, John Brennan, Violeta Holmes, Ibad Kureshi
Conference Paper: 2013 First International Symposium on Computing and Networking, December 4-6, 2013, Matsuyama, Japan.

Abstract

Grid computing has, in recent history, become an invaluable tool for scientific research. As grid middleware has matured, considerations have extended beyond the core functionality, towards greater usability. The aim of this paper is to consider how resources that are available to the users across the Queensgate Grid (QGG) at the University of Huddersfield (UoH), could be accessed with the help of an ontology-driven interface.

The interface is a part of the Agent in Grid (AiG) project under development at the Systems Research Institute of the Polish Academy of Sciences (SRIPAS). It is to be customized and integrated with the UoH computing environment. The overarching goal is to help users of the grid infrastructure. The secondary goals are: (i) to improve the performance of the system, and (ii) to equalize the distribution of work among resources. Results presented in this paper include the new ontology that is being developed for the grid at the UoH, and a description of the issues encountered during the development of a scenario in which a user searches for an appropriate resource within the Unicore grid middleware and submits a job to be executed on that resource.

Hybrid Computer Cluster with High Flexibility.

Shuo Liang, Violeta Holmes, Ibad Kureshi
Conference Paper: IEEE Cluster 2012, 24-28 September 2012, Beijing, China.

Abstract

In this paper we present a cluster middleware, designed to implement a Linux-Windows Hybrid HPC Cluster, which not only holds the characteristics of both operating systems but also accepts and schedules jobs in both environments. Beowulf Clusters have become an economical and practical choice for small- and medium-sized institutions to provide High Performance Computing (HPC) resources. The HPC resources are required for running simulations, image rendering and other calculations, and to support software requiring a specific operating system. To support such software, small-scale computer clusters would have to be divided into two or more clusters if each is to run a single operating system. x86 virtualisation technology would help run multiple operating systems on one computer, but only with the latest hardware, which many legacy Beowulf clusters do not have. To aid institutions who rely on legacy facilities without virtualisation support rather than high-end HPC resources, we have developed and deployed a bi-stable hybrid system built around Linux CentOS 5.5 with the improved OSCAR middleware, and Windows Server 2008 and Windows HPC 2008 R2. This hybrid cluster is utilised as part of the University of Huddersfield campus grid.

Implementing a Condor pool using a Green-IT policy

David Gubb, Violeta Holmes, Ibad Kureshi, Shuo Liang, Yvonne James
Conference Paper: Digital Research 2012, 10-12 September 2012, Oxford, UK.

Abstract

High Throughput Computing (HTC) systems are designed to utilise available resources on a network of idle machines in an institution or organisation by cycle stealing. This provides an additional ‘free’ resource from the existing computing and networking infrastructure for modelling and simulation requiring a large number of small jobs, such as applications from biology, chemistry, physics, and digital signal processing. At the University of Huddersfield, there are thousands of idle laboratory machines that could be used to run serial/parallel jobs by cycle stealing. Our HTC system, implemented in Condor [1], is part of the Queensgate Campus Grid (QGG) [2], which consists of a number of dedicated departmental and university computer clusters.

Condor is an excellent HTC tool that excels in cycle stealing and job scheduling on idle machines. However, only idle powered machines can be used from a networked pool. Many organizations deploy power saving mechanisms to try to reduce energy consumption in their systems, and power down idle resources, using rigid and inflexible power management policies. The University of Huddersfield Computing Services use the Energy Star EZ GPO power saving tool that runs as a Windows service and detects how long the computer has been idle. Then it allows the computer to first turn off the screen and then go into hibernation.

Our research and development work is focused on implementing an HTC system using Condor to work within the “green IT” policy of a higher education institution, conforming to green IT challenges for a multi-platform, multi-discipline user/resource base. This system will allow Condor to turn on machines that may have gone to sleep due to lack of usage when there is a large queue of pending jobs. The decision to utilise dormant resources will be made on a variety of factors such as job priority, job requirements, user priority, time of day, flocking options, queue conditions, etc. Good-practice scheduling policies would need to be devised to work within this “green IT” pool.
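A common way to implement the "turn machines back on" step described above is Wake-on-LAN. The sketch below is hypothetical (the abstract does not name the mechanism, and the MAC address is made up): it builds the standard magic packet of six 0xFF bytes followed by sixteen copies of the target MAC, which the Condor-side policy engine could broadcast when the queue grows:

```python
import socket

def magic_packet(mac):
    # A WoL magic packet is 6 x 0xFF followed by 16 copies of the MAC.
    mac_bytes = bytes.fromhex(mac.replace(":", ""))
    if len(mac_bytes) != 6:
        raise ValueError("expected a 48-bit MAC address")
    return b"\xff" * 6 + mac_bytes * 16

def wake(mac, broadcast="255.255.255.255", port=9):
    # Broadcast the packet on the LAN; the NIC powers the machine on.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), (broadcast, port))

# wake("00:1a:2b:3c:4d:5e")  # hypothetical lab-machine MAC
```

The interesting part of the research is not the packet itself but the policy deciding *when* to send it (job priority, time of day, queue depth), which this sketch deliberately leaves out.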

Hybrid HPC – Establishing a Bi-Stable Dual Boot Cluster for Linux with OSCAR middleware and Windows HPC 2008 R2

Ibad Kureshi, Shuo Liang, Violeta Holmes
Conference Paper: UK eScience All-Hands Meeting, 13-16 September 2010, Cardiff, Wales.

Abstract

The advent of open source software leading to Beowulf clusters has enabled small to medium sized Higher and Further Education institutions to remove the “computational power” factor from research ventures. In an effort to catch up with leading universities in the realm of research, many universities are investing in small departmental HPC clusters to help with simulations, renders and calculations. These small HE/FE institutions have in the past benefited from cheaper software and operating system licences. This raises the question as to which platform, Linux or Windows, should be implemented on the cluster. As smaller and medium universities move into research, many Linux-based applications and codes better suit their research needs, but the teaching base still keeps departments tied to Windows-based applications. In such institutions, where it is usually recycled machines that are linked to form the clusters, it is not often feasible to set up more than one cluster.

This paper will propose a method to implement a Linux-Windows Hybrid HPC Cluster that seamlessly and automatically accepts and schedules jobs in both domains. Using Linux CentOS 5.4 with OSCAR 5.2 beta 2 middleware with Windows Server 2008 and Windows HPC 2008 R2 (beta) a bi-stable hybrid system has been deployed at the University of Huddersfield. This hybrid cluster is known as the Queensgate Cluster. We will also examine innovative solutions and practices that are currently being followed in the academic world as well as those that have been recommended by Microsoft® Corp.

Establishing a University Grid for HPC Applications

Ibad Kureshi
Thesis: Master’s Thesis, University of Huddersfield, 2011

Abstract

This thesis documents a project undertaken at the University of Huddersfield between October 2009 and August 2010 to set up a High Performance Computing (HPC) resource which would serve the University’s research community by providing a robust computing solution. The thesis looks at the various requirements different fields have with regard to a computing solution, and the tools available to meet these specific needs. It serves as a manual for any small to medium sized institution that considers setting up a local HPC resource, covering all considerations regarding hardware, software, licensing, infrastructure, HR, etc. for setting up a centralised computing resource, with sustainability and robustness being the central aims of the proposed resource. The possibilities of cross-continent and cross-institution collaboration using Cluster and Grid technologies are explored, and the method for connecting to the UK eScience community through the NGS is explained.

Creating an HE ICT Infrastructure Fit for the 21st Century

Violeta Holmes, Ibad Kureshi
Presentation: Higher Education Show 2013, 25 April 2013, London, UK.


Design of a new network infrastructure using RPC for the University of Huddersfield campus grid.

Yvonne James, Violeta Holmes, Ibad Kureshi, David Gubb, Shuo Liang
Presentation: Proceedings of The Queen’s Diamond Jubilee Computing and Engineering Annual Researchers’ Conference 2012 (CEARC’12). University of Huddersfield, Huddersfield. ISBN 978-1-86218-106-9


High performance distributed computing resources to enable e-science research.

Ibad Kureshi, Violeta Holmes, Shuo Liang, David Gubb, David Cooke
Presentation: Proceedings of The Queen’s Diamond Jubilee Computing and Engineering Annual Researchers’ Conference 2012 (CEARC’12). University of Huddersfield, Huddersfield. ISBN 978-1-86218-106-9


Implementing a Condor pool using a Green-IT policy.

David Gubb, Violeta Holmes, Ibad Kureshi, Yvonne James
Presentation: Proceedings of The Queen’s Diamond Jubilee Computing and Engineering Annual Researchers’ Conference 2012 (CEARC’12). University of Huddersfield, Huddersfield. ISBN 978-1-86218-106-9


Optimising multi-user and multi-application HPC system utilisation using effective queue management.

Ibad Kureshi, Violeta Holmes, Shuo Liang, David Cooke
Presentation: Proceedings of The Queen’s Diamond Jubilee Computing and Engineering Annual Researchers’ Conference 2012 (CEARC’12). University of Huddersfield, Huddersfield. ISBN 978-1-86218-106-9


Providing IaaS using a private cloud in an HE environment.

Ibad Kureshi, Violeta Holmes, David Gubb, Shuo Liang, Stephen Bonner, David Cooke
Presentation: Proceedings of The Queen’s Diamond Jubilee Computing and Engineering Annual Researchers’ Conference 2012 (CEARC’12). University of Huddersfield, Huddersfield. ISBN 978-1-86218-106-9


Robust mouldable intelligent scheduling using application benchmarking for elastic environments.

Ibad Kureshi, Violeta Holmes, David Cooke, Robert Allan, Shuo Liang, David Gubb
Presentation: Proceedings of The Queen’s Diamond Jubilee Computing and Engineering Annual Researchers’ Conference 2012 (CEARC’12). University of Huddersfield, Huddersfield. ISBN 978-1-86218-106-9


Current Teaching

  • 2016 – Present

    COMP3381: SOFTWARE, SYSTEMS AND APPLICATIONS III

    Undergraduate
    Durham University (Computing)

    In this module students critically evaluate the development of software solutions across existing and emerging technology areas. The module is divided into four parts; I teach the cloud computing technology area. Through the use of relevant case studies, students understand and apply fundamental principles of applied system solutions to a range of real-world problems. In Cloud Computing specifically, the course covers the applications, challenges and demands that drive developments, along with architectures, service models and current technologies.

  • 2014 – Present

    Code First Girls: Level 1 (HTML/CSS)

    Open to All
    Durham City/Durham University

    This beginner session provides an introduction to front-end web development, covering the basics of HTML/CSS and jQuery. You will learn how to design the layout and format of a webpage, as well as how to code collaboratively using GitHub.

Teaching History

  • 2010 – 2014

    Parallel Computer Architectures: Clusters and Grids

    Undergraduate
    University of Huddersfield (Engineering)

    In this module students are introduced to Computer Cluster, Cloud and Grid technologies and applications. Semester one focuses on the fundamental components of Cluster environments, such as Commodity Components for Clusters, Network Services/Communication software, Cluster Middleware, Resource management, and Programming Environments. In semester two, students study the fundamental components of Grid environments, such as Authentication, Authorization, Resource access, and Resource discovery. The hands-on laboratory exercises provide the necessary practical experience with Cluster and Grid middleware software required to construct Cluster and Grid applications.

  • 2010 – 2013

    DSP Applications

    Undergraduate
    University of Huddersfield (Engineering)

    The module combines the theory of signal processing and analysis of discrete-time systems with practical aspects of Digital Signal Processing (DSP) applied to the design of digital filters. Semester one focuses on signal processing operations and analysis in the time and frequency domains, and on digital FIR and IIR filter design and simulation using MATLAB. In semester two students implement their digital filter designs using a DSP software and hardware development system. A range of DSP design case studies (e.g. audio filters, two-dimensional filters and adaptive filters) will be used to illustrate typical DSP applications through practical laboratory work.

  • 2013 2010

    Virtual Instrumentation

    Postgraduate
    University of Huddersfield (Engineering)

    Virtual instruments represent a change from traditional hardware-centred instrumentation systems to software-centred systems that use the computing power, display, and connectivity capabilities of desktop computers and workstations. With virtual instruments, engineers and scientists build measurement and automation systems that suit their needs exactly (user-defined) instead of being limited by traditional fixed-function instruments (manufacturer-defined). In this module students learn the fundamentals of programming in LabVIEW and acquire skills to design effective solutions to a variety of instrumentation problems. The laboratory exercises provide the necessary practical experience required to design and develop computer-based systems to emulate a range of instruments.

  • 2013 2010

    Parallel Computer Architectures Computer Clusters

    Postgraduate
    University of Huddersfield (Engineering)

    Many existing and future computer-based applications impose exceptional demands on performance that traditional predominantly single-processor systems cannot offer. Large-scale computational simulations for scientific and engineering applications now routinely require highly parallel computers. In this module you will learn about Parallel Computer Architectures, Legacy and Current Parallel Computers, trends in Supercomputers and Software Issues in Parallel Computing; you will be introduced to Computer Cluster, Cloud and Grid technologies and applications. Students study the fundamental components of Cluster environments, such as Commodity Components for Clusters, Network Services/Communication software, Cluster Middleware, Resource management, and Programming Environments. The hands-on laboratory exercises provide the necessary practical experience with Cluster middleware software required to construct Cluster applications.

  • 2012 2010

    Digital Audio Signal Processing

    Undergraduate
    University of Huddersfield (Music Tech)

    The module combines the theory of signal processing and analysis of audio systems with practical aspects of Digital Signal Processing (DSP). Students learn about digital filter design, Digital Signal Processors (DSPs) and their applications in audio systems. Semester one focuses on signal processing operations and analysis in the time and frequency domains, and on digital FIR and IIR filter design and simulation using MATLAB and LabVIEW. In semester two, students apply their digital filter designs to create artificial digital audio effects using a DSP software and hardware development system. Case studies are used to illustrate typical audio DSP applications.
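    As a minimal sketch of one such artificial audio effect, a feedforward echo can be written as the difference equation y[n] = x[n] + g·x[n − D]. The Python/NumPy version below uses an assumed delay and gain for illustration only:

    ```python
    import numpy as np

    def echo(x, delay_samples, gain):
        """Feedforward echo: y[n] = x[n] + gain * x[n - delay_samples]."""
        y = np.copy(x).astype(float)
        y[delay_samples:] += gain * x[:-delay_samples]
        return y

    # A unit impulse through the effect reveals its impulse response:
    # the original sample plus a delayed, attenuated copy.
    x = np.zeros(10)
    x[0] = 1.0
    y = echo(x, delay_samples=4, gain=0.5)
    print(y)  # 1.0 at n=0 and a 0.5 echo at n=4
    ```

    Adding feedback (mixing previous outputs back in) turns the same structure into a repeating echo, one of the standard audio-effect case studies.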

  • 2010 2010

    Control Systems

    H.N.D.
    University Centre Blackburn College

    In this module students are introduced to MATLAB and SIMULINK software to enable modelling of the dynamic response of instruments, devices and systems to different types of input - for example thermometers, DC motors, electronic filters and suspension systems. Students gain an understanding of how Laplace Transforms are used to simulate processes and how they are applied in the design of controllers for complex systems, such as position control systems. Students design simple controllers for various processes using Proportional and Integral (PI) control and learn how to determine whether such systems are likely to become unstable. Further analysis techniques, such as the Discrete Fourier Transform, are also taught.
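    The module's simulations are built in MATLAB/SIMULINK. Purely as an illustrative sketch of the PI-control idea, the loop below simulates a first-order process under Proportional and Integral control in Python; the plant, gains and setpoint are assumed values chosen for demonstration, not taken from the coursework:

    ```python
    # Discrete-time (Euler) simulation of PI control of a first-order
    # process tau*y' = -y + u, with illustrative gains.
    dt, tau = 0.01, 1.0        # time step (s) and process time constant (s)
    kp, ki = 2.0, 1.0          # proportional and integral gains
    setpoint, y, integral = 1.0, 0.0, 0.0

    for _ in range(3000):      # simulate 30 seconds
        error = setpoint - y
        integral += error * dt
        u = kp * error + ki * integral   # PI control law
        y += dt * (-y + u) / tau         # first-order plant update

    print(round(y, 3))  # integral action drives steady-state error to zero
    ```

    With these gains the closed-loop poles are real and negative, so the response settles without oscillation; raising the gains far enough is the standard way to explore the onset of instability.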

At My Office

You can find me at my office in the Christopherson building, School of ECS, Durham University. The office number is E291, within the Institute of Advanced Research Computing. I am in my office every day from 10:00 am until 4:00 pm, but consider calling ahead to arrange an appointment.

On the Tweet-O-Sphere