Module Code: H9ETS
Long Title: Emerging Artificial Intelligence Technologies and Sustainability
Title: Emerging Artificial Intelligence Technologies and Sustainability
Module Level: LEVEL 9
EQF Level: 7
EHEA Level: Second Cycle
Credits: 5
Module Coordinator: Rejwanul Haque
Module Author: Shauni Hegarty
Departments: School of Computing
Specifications of the qualifications and experience required of staff

PhD/Master’s degree in computing or a cognate discipline. Industry experience is also desirable.

Learning Outcomes
On successful completion of this module the learner will be able to:
# Learning Outcome Description
LO1 Identify, synthesise, and communicate the impacts of non-green AI designs, models, algorithms, and components on humanity, society, and the environment.
LO2 Critically assess AI systems in terms of parameters inherently related to sustainability.
LO3 Design and evaluate sustainable AI strategies to bridge computational and sustainability science for assessing, evaluating, designing, and deploying greener AI models and products.
LO4 Demonstrate expert knowledge of usable, trusted, reproducible AI development.
Module Recommendations

This is prior learning (or a practical skill) that is required before enrolment on this module. While the prior learning is expressed as named NCI module(s) it also allows for learning (in another module or modules) which is equivalent to the learning specified in the named module(s).

No recommendations listed
Co-requisite Modules
No Co-requisite modules listed
Entry requirements

Applicants are required to hold a minimum of a Level 8 honours qualification (2.2 or higher), or equivalent, on the National Qualifications Framework in either a STEM discipline (e.g., Information Management Systems, Information Technology, Computer Science, Computer Engineering) or a Business discipline (e.g., Business Information Systems, Business Administration, Economics), together with a minimum of three years of relevant industry experience, ideally, though not necessarily, in management. Numerical and computer proficiency should form part of their work experience or formal training. Graduates of programmes that do not embed technical or mathematical problem-solving skills will need to demonstrate such skills in addition to their Level 8 qualification (through certifications, additional qualifications, certified experience, or assessment tests). All applicants must provide evidence of prior Mathematics and Computing module experience (e.g., via academic transcripts or recognised certification), demonstrated in at least one mathematics/statistics module and one computing module; alternatively, their statement of purpose must specify numerical and computing work experience.

NCI also operates a prior experiential learning policy where graduates with lower, or no formal qualifications, currently working in a relevant field, may be considered for the programme. 

Applicants must also have their own laptop meeting the minimum required specification, which will be communicated to each applicant by both the admissions and marketing departments.


Module Content & Assessment

Indicative Content
Red and Green AI
Measures of efficiency: carbon emissions; electricity usage; number of learnable parameters; cost of AI development; FPO (floating-point operations); FPO cost of existing models.
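As a minimal illustration of these efficiency measures, the sketch below counts the FPO of a small fully connected network and converts an assumed GPU power draw into energy and carbon estimates. The layer sizes, the 250 W draw, and the 0.4 kgCO2/kWh grid intensity are hypothetical values chosen for illustration, not measurements.

```python
def dense_layer_fpo(n_in, n_out):
    """FPO for one dense layer: n_in multiplies + (n_in - 1) adds per output."""
    return n_out * (2 * n_in - 1)

def network_fpo(layer_sizes):
    """Total FPO for one forward pass through consecutive dense layers."""
    return sum(dense_layer_fpo(a, b) for a, b in zip(layer_sizes, layer_sizes[1:]))

def training_energy_kwh(seconds, gpu_watts=250):
    """Energy consumed when training for `seconds` at an assumed power draw."""
    return gpu_watts * seconds / 3600 / 1000

def carbon_kg(kwh, grid_kg_per_kwh=0.4):
    """CO2 estimate from energy use and an assumed grid carbon intensity."""
    return kwh * grid_kg_per_kwh

fpo = network_fpo([784, 256, 64, 10])    # one forward pass of a small MLP
energy = training_energy_kwh(24 * 3600)  # one GPU-day at the assumed 250 W
print(fpo, energy, carbon_kg(energy))    # 435126 FPO, 6.0 kWh, 2.4 kg CO2
```

In practice such figures depend heavily on hardware utilisation and the local grid mix, which is why the module stresses protocols for recording actual energy use rather than relying on analytic counts alone.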
Sustainability principles for AI
Energy and policy considerations for AI and deep learning; recommendations; the need for protocols that record how much energy is used when training an AI model; guidelines for defining performance gain; policy and recommendations on Big Data for sustainability and a sustainable future.
Towards Greener AI Technology
Better algorithm design; better memory management; ways of reducing computing demands in terms of energy and space consumption; better hardware design for efficient deep learning algorithms; empirical justification of model complexity; conceptual or practical simplification of an existing model in terms of interpretability, inference time, and robustness.
Artificial Intelligence for Social Good
Encouraging technology affordable to SMEs; AI technology with financial as well as environmental and/or societal impacts (e.g., public health, education); AI models for public commitments.
Machine Learning and Decision Making for Sustainability
Bridging computational science and sustainability science; advanced learning techniques for this purpose, e.g., transfer learning (TL) to address data gaps and knowledge distillation (KD) to derive smaller models from larger ones; pruning larger models; quantisation for model compression.
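Two of the compression techniques listed here can be sketched in a few lines of plain Python (no ML framework): magnitude pruning zeroes out small weights, and symmetric 8-bit quantisation maps the survivors to integers plus a scale factor. The threshold and the toy weight values are illustrative assumptions.

```python
def prune(weights, threshold=0.05):
    """Magnitude pruning: zero out weights whose |w| falls below threshold."""
    return [w if abs(w) >= threshold else 0.0 for w in weights]

def quantise_int8(weights):
    """Symmetric 8-bit quantisation: scale so the largest |w| maps to 127."""
    scale = max(abs(w) for w in weights) / 127 or 1.0
    return [round(w / scale) for w in weights], scale

def dequantise(q, scale):
    """Approximate reconstruction of the original weights."""
    return [v * scale for v in q]

w = [0.81, -0.02, 0.40, 0.004, -0.63]
pruned = prune(w)                 # small weights become exact zeros
q, scale = quantise_int8(pruned)  # int8 codes plus one float scale
print(pruned, q)
```

Real deployments apply the same ideas tensor-by-tensor inside a framework, but the sustainability argument is identical: fewer non-zero parameters and narrower number formats mean less memory traffic and less energy per inference.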
Bias Awareness and Mitigation
Model/algorithmic bias; bias from training data, e.g., gender bias; algorithms for debiasing models; fairness of AI; identifying and preventing discriminatory behaviour of AI systems.
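One of the simplest fairness checks covered here, demographic parity, compares a model's positive-prediction rate across groups. The sketch below uses hypothetical binary decisions and group labels; a gap near zero suggests, but does not prove, parity on this one metric.

```python
def positive_rate(preds, groups, group):
    """Fraction of positive predictions the model gives to one group."""
    selected = [p for p, g in zip(preds, groups) if g == group]
    return sum(selected) / len(selected)

def demographic_parity_gap(preds, groups):
    """Absolute difference in positive-prediction rates between two groups."""
    a, b = sorted(set(groups))
    return abs(positive_rate(preds, groups, a) - positive_rate(preds, groups, b))

preds  = [1, 0, 1, 1, 0, 1, 0, 0]                  # toy binary decisions
groups = ["f", "f", "f", "f", "m", "m", "m", "m"]  # toy group labels
print(demographic_parity_gap(preds, groups))       # 0.75 vs 0.25 -> gap 0.5
```

A large gap flags a model for investigation; whether it reflects unfair discrimination or a legitimate base-rate difference is exactly the kind of judgement this topic asks learners to make.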
Refocusing on common sense
Emphasising the societal and environmental impacts of AI; safety (e.g., self-driving cars); threats of AI to the future of humanity (autonomous weapons; weapons of mass destruction).
Encouraging alternative areas of AI research; empowering marginalised research areas and researchers
Empowering marginalised researchers (e.g., engaging women and minority research groups) and marginalised research areas.
Emerging AI technology and sustainability I
AI for sustainability; Environmental sustainability; AI for climate change; AI, smart and sustainable cities; Modelling Commuting Patterns; Vision-Based Road Traffic Congestion Monitoring; Disease Surveillance and Diagnosis (e.g., imaging, genomics, electronic health records, drug discovery); AI for reducing food waste.
Emerging AI technology and sustainability II
Intelligence Gathering, compensating for a lack of human experts; Crop disease monitoring; Identifying drought and agricultural trends in every locality; Prediction of food insecurity from remote sensing data; Cropland disappearance; Sustainable agriculture, wildlife, and agro-engineering
Forecasting, Prevention, and Mitigation of the Malicious Use of AI
Malicious use of AI-based deepfake technology; malicious chatbots; forecasting, preventing, and mitigating the malicious use of AI.
Facilitating Usable AI and Trusted AI
Facilitating usable AI development; facilitating trusted AI development; emphasising reproducibility.
Assessment Breakdown            %
Continuous Assessment           50.00%
End of Module Assessment        50.00%


Full Time

Assessment Type: Formative Assessment % of total: Non-Marked
Assessment Date: n/a Outcome addressed: 1,2,3,4
Non-Marked: Yes
Assessment Description:
Formative assessment will be provided on the in-class individual or group activities. Feedback will be provided in written or oral format, or on-line through Moodle. In addition, in class discussions will be undertaken as part of the practical approach to learning.
Assessment Type: Continuous Assessment % of total: 50
Assessment Date: n/a Outcome addressed: 1,2,3
Non-Marked: No
Assessment Description:
Continuous assessments will be conducted and completed during the module. These will assess learners’ knowledge and competences on the topics covered so far (e.g., identifying, assessing, and evaluating non-green AI designs, models, algorithms, and components). Learners will propose and execute a project on sustainability strategies for AI. The final submission will consist of a written report demonstrating the design of sustainability strategies for an AI system on a chosen topic of interest, together with a performance evaluation of the AI system(s) in terms of a variety of parameters related to sustainability.
End of Module Assessment
Assessment Type: Terminal Exam % of total: 50
Assessment Date: End-of-Semester Outcome addressed: 1,2,3,4
Non-Marked: No
Assessment Description:
The examination will be a minimum of three hours in duration and may include a mix of short-answer questions, vignettes, essay-based questions, and case-study-based questions requiring the application of core module competencies. Marks will be awarded based on clarity, appropriate structure, relevant examples, depth of topic knowledge, and evidence of reading beyond the core texts.
No Workplace Assessment
Reassessment Requirement
Repeat examination
Reassessment of this module will consist of a repeat examination. It is possible that there will also be a requirement to be reassessed in a coursework element.

NCIRL reserves the right to alter the nature and timings of assessment


Module Workload

Module Target Workload: 0 Hours

Module Resources

Recommended Book Resources
  • Tim Frick. (2016), Designing for Sustainability, O'Reilly Media, p.250, [ISBN: 978-1491935774].
  • Peter Dauvergne. (2020), AI in the Wild, MIT Press, p.272, [ISBN: 978-0262539333].
Supplementary Article/Paper Resources
  • Schwartz, R., Dodge, J., Smith, N. A., & Etzioni, O. (2019), Green AI. arXiv:1907.10597 [cs].
  • Strubell, E., Ganesh, A., & McCallum, A. (2019), Energy and policy considerations for deep learning in NLP. arXiv:1906.02243 [cs].
  • Corbett, C. J. (2018), How sustainable is big data? Production and Operations Management, 27(9), 1685-1695. DOI: 10.1111/poms.12837.
  • Martineau, L. (2020), Shrinking deep learning’s carbon footprint.
  • Hager, G. D., Drobnis, A., Fang, F., Ghani, R., Greenwald, A., Lyons, T., Parkes, D. C., Schultz, J., Saria, S., Smith, S. F., & Tambe, M. (2019), Artificial intelligence for social good. arXiv:1901.05406 [cs].
  • Liu, L., Silva, E. A., Wu, C., & Wang, H. (2017), A machine learning-based method for the large-scale evaluation of the qualities of the urban environment. Computers, Environment and Urban Systems, 65, 113-125. DOI: 10.1016/j.compenvurbsys.2017.06.003.
  • Holstein, K., Vaughan, J. W., Daumé, H., Dudik, M., & Wallach, H. (2019), Improving fairness in machine learning systems: What do industry practitioners need? arXiv:1812.05239 [cs].
  • Nishant, R., Kennedy, M., & Corbett, J. (2020), Artificial intelligence for sustainability: Challenges, opportunities, and a research agenda. International Journal of Information Management, 53, 102104. DOI: 10.1016/j.ijinfomgt.2020.102104.
  • Quinn, J., Frias-Martinez, V., & Subramanian, L. (2014), Computational sustainability and artificial intelligence in the developing world. AI Magazine, 35(3), 36-47. DOI: 10.1609/aimag.v35i3.2529.
  • Westerlund, M. (2019), The emergence of deepfake technology: A review. Technology Innovation Management Review, 9(11), 40-53. DOI: 10.22215/timreview/1282.
  • Pantserev, K. A. (2020), The malicious use of AI-based deepfake technology as the new threat to psychological security and political stability. In: Jahankhani H., Kendzierskyj S., Chelvachandran N., Ibarra J. (eds) Cyber Defence in the Age of AI, Smart Soc.
  • Brundage, M., Avin, S., Clark, J., Toner, H., Eckersley, P., Garfinkel, B., Dafoe, A., Scharre, P., Zeitzoff, T., et al. (2018), The malicious use of artificial intelligence: Forecasting, prevention, and mitigation. arXiv:1802.07228 [cs].
  • Gao, J., Wang, W., Zhang, M., Chen, G., Jagadish, H. V., Li, G., Ng, T. K., Ooi, B. C., Wang, S., & Zhou, J. PANDA: Facilitating usable AI development. arXiv:1804.09997 [cs].
  • Talwalkar, A. (2020), AI in the 2020s must get greener—and here’s how.
This module does not have any other resources
Discussion Note: