Artificial intelligence (AI) is an interdisciplinary field of computer science focused on building machines capable of tasks that typically require human intelligence: learning, reasoning, problem-solving, decision-making, perception, and natural language processing. AI algorithms enable machines to learn from data, recognise patterns, and make predictions or decisions based on the available information. The field has grown rapidly in recent years, with applications across many industries and domains.

Techniques used in AI

One of the key techniques used in AI is machine learning, which uses algorithms that learn from data without being explicitly programmed. Trained on large datasets, machine learning models identify patterns and relationships between variables and use those patterns to make predictions or decisions. Supervised learning, unsupervised learning, and reinforcement learning are three common families of machine learning techniques, each with its own strengths and limitations (Alpaydin, 2010).
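The core idea of supervised learning, where a model generalises from labelled examples, can be illustrated with one of the simplest possible learners. The sketch below is a toy 1-nearest-neighbour classifier in plain Python (the data, labels, and function names are invented for illustration, not from any particular library):

```python
# Toy illustration of supervised learning: a 1-nearest-neighbour classifier
# "learns" by memorising labelled examples, then predicts the label of the
# closest known point. (Teaching sketch only, not a production model.)

def euclidean(a, b):
    # Straight-line distance between two points in feature space.
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def nearest_neighbour_predict(train_X, train_y, query):
    # Find the training example closest to the query and return its label.
    best = min(range(len(train_X)), key=lambda i: euclidean(train_X[i], query))
    return train_y[best]

# Labelled training data: two clusters in a 2-D feature space.
train_X = [(1.0, 1.0), (1.2, 0.8), (5.0, 5.0), (4.8, 5.2)]
train_y = ["low", "low", "high", "high"]

print(nearest_neighbour_predict(train_X, train_y, (1.1, 0.9)))  # → low
print(nearest_neighbour_predict(train_X, train_y, (5.1, 4.9)))  # → high
```

Nothing here was "programmed" with rules for telling the clusters apart; the decision boundary falls out of the labelled data, which is the essence of learning from examples.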

Another important technique used in AI is deep learning, a subset of machine learning based on neural networks: layers of interconnected nodes whose connection weights are adjusted during training through a process called backpropagation. Trained on very large datasets, deep learning models can perform tasks such as image recognition, speech recognition, and natural language processing (Goodfellow et al., 2016).
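Backpropagation can be seen in miniature without any framework at all. The sketch below hand-codes a tiny 2-2-1 network and trains it on XOR with plain stochastic gradient descent; the network size, learning rate, and epoch count are arbitrary choices for illustration, and the point is only that the loss falls as errors are propagated backwards:

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Tiny 2-2-1 network: 2 inputs, 2 hidden units, 1 output. Weights are plain
# Python lists; real deep learning uses optimised tensor libraries.
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]  # hidden weights
b1 = [0.0, 0.0]
w2 = [random.uniform(-1, 1) for _ in range(2)]                       # output weights
b2 = 0.0

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]  # XOR
lr = 0.5  # learning rate (arbitrary for this sketch)

def forward(x):
    h = [sigmoid(w1[j][0] * x[0] + w1[j][1] * x[1] + b1[j]) for j in range(2)]
    o = sigmoid(w2[0] * h[0] + w2[1] * h[1] + b2)
    return h, o

def loss():
    return sum((forward(x)[1] - y) ** 2 for x, y in data)

initial = loss()
for _ in range(5000):
    for x, y in data:
        h, o = forward(x)
        # Backpropagation: push the output error back through each layer.
        d_o = (o - y) * o * (1 - o)                # gradient at the output
        for j in range(2):
            d_h = d_o * w2[j] * h[j] * (1 - h[j])  # gradient at hidden unit j
            w2[j] -= lr * d_o * h[j]
            w1[j][0] -= lr * d_h * x[0]
            w1[j][1] -= lr * d_h * x[1]
            b1[j] -= lr * d_h
        b2 -= lr * d_o

print(round(initial, 3), round(loss(), 3))  # loss drops as training proceeds
```

The same chain-rule bookkeeping, repeated across millions of weights and examples, is what trains the large models behind modern image and speech recognition.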

Applications of AI

AI has a wide range of applications across industries and domains.

In healthcare, AI is being used to analyze patient data and medical images to help doctors make more accurate diagnoses and develop more effective treatment plans (Ravi et al., 2017).

In finance, AI is being used to detect fraud, optimize investment portfolios, and develop trading algorithms (Deng, 2018).

In transportation, AI is being used to improve traffic flow and reduce accidents through the use of autonomous vehicles and intelligent transportation systems (Gartner, 2021).

Concerns About the Impacts of AI

Despite the significant potential benefits of AI, there are also concerns around the impact it may have on jobs, privacy, and security.

As machines become increasingly capable of performing tasks that were previously done by humans, there is a risk of significant job displacement in certain industries (Brynjolfsson & Mitchell, 2017).

There are also concerns around the potential for AI systems to perpetuate biases and discrimination, particularly in areas such as criminal justice and hiring (Buolamwini & Gebru, 2018; Mittelstadt et al., 2019).

Additionally, the use of AI raises important questions around privacy and security, particularly around the collection and use of personal data (Office of the Australian Information Commissioner, 2021).

To address these concerns, it is important for organisations and policymakers to consider the ethical and social implications of AI, and to develop appropriate regulations and guidelines for its development and use. This includes ensuring transparency and accountability in AI algorithms, regularly reviewing and testing them for potential biases, and implementing appropriate privacy and security measures (Floridi et al., 2018).

Furthermore, it is important to engage in public discourse and education around AI to increase awareness and understanding of the ethical and social implications of this technology (Yoo et al., 2018).

AI is a rapidly evolving technology with significant potential to transform the way we live and work. Machine learning and deep learning techniques enable machines to learn from data and perform tasks that were previously thought to be the sole domain of human intelligence.

While AI has a wide range of applications across industries and domains, there are also concerns around the potential impact it may have on jobs, privacy, and security.

By considering the ethical and social implications of AI and developing appropriate regulations and guidelines, we can help to ensure that this technology is used in a responsible and ethical manner, for the benefit of individuals and society as a whole.

Furthermore, there is a need to address the skills gap in AI development and use. There is currently a shortage of skilled AI professionals in many countries, including Australia (Australian Computer Society, 2019).

To address this, there is a need to invest in education and training programs that can equip individuals with the necessary skills to work with AI technologies (Koehler, 2019).


Moreover, there is a growing interest in the development of AI that is designed to be transparent, explainable, and ethical.

Explainable AI (XAI) is a subfield of AI that focuses on developing algorithms and models that can provide transparent and interpretable outputs (Gunning, 2017). XAI can help to address concerns around the lack of transparency and accountability in many AI systems, which can make it difficult to understand how decisions are being made.
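One simple, model-agnostic probe in the XAI spirit is permutation importance: shuffle one input feature and see how much the model's error grows. A feature whose shuffling hurts predictions is one the model genuinely relies on. The sketch below applies the idea to an invented toy model (the data and the model itself are illustrative assumptions; real XAI toolkits offer far richer methods):

```python
import random

random.seed(1)

# A toy "model" that truly depends only on feature 0.
def model(x):
    return 3.0 * x[0]

# Synthetic data: targets follow feature 0; feature 1 is pure noise.
X = [(random.random(), random.random()) for _ in range(200)]
y = [3.0 * x0 for x0, _ in X]

def mse(inputs, targets):
    # Mean squared error of the model on the given data.
    return sum((model(x) - t) ** 2 for x, t in zip(inputs, targets)) / len(inputs)

def permutation_importance(feature):
    # Shuffle one feature's column and measure how much the error rises.
    shuffled = [x[feature] for x in X]
    random.shuffle(shuffled)
    X_perm = [tuple(s if i == feature else v for i, v in enumerate(x))
              for x, s in zip(X, shuffled)]
    return mse(X_perm, y) - mse(X, y)

imp0 = permutation_importance(0)
imp1 = permutation_importance(1)
print(imp0 > imp1)  # → True: the model visibly depends on feature 0, not feature 1
```

Even this crude probe turns an opaque prediction function into a ranked account of which inputs drive its decisions, which is the kind of transparency XAI aims to provide.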

Ethical AI, on the other hand, refers to the development and use of AI technologies in a manner that is consistent with ethical principles and values (Jobin et al., 2019). This includes ensuring that AI systems are designed to promote human well-being, respect for privacy, and the protection of human rights (Bostrom & Yudkowsky, 2014).

AI in Australia

In Australia, a growing number of organisations are working on the development and use of AI technologies. The Commonwealth Scientific and Industrial Research Organisation (CSIRO), for example, is Australia’s national science agency and is involved in a range of AI-related research and development initiatives (CSIRO, n.d.). The Australian government has also developed an AI Ethics Framework, which provides a set of principles and guidelines for the ethical development and use of AI in Australia (Australian Government, 2019). Additionally, a number of private sector organisations in Australia are investing in AI research and development, including Atlassian, Canva, and Atura AI, among others.


References

Alpaydin, E. (2010). Introduction to machine learning (2nd ed.). MIT Press.

Australian Computer Society. (2019). Australia’s digital pulse 2019. Retrieved from https://ia.acs.org.au/article/2019/australia-s-digital-pulse-2019.html

Australian Government. (2019). Australian government AI ethics framework. Retrieved from https://www.ai.gov.au/sites/default/files/2019-11/AI%20Ethics%20Framework.pdf

Bostrom, N., & Yudkowsky, E. (2014). The ethics of artificial intelligence. In K. Frankish & W. M. Ramsey (Eds.), The Cambridge handbook of artificial intelligence (pp. 316-334). Cambridge University Press.

Brynjolfsson, E., & Mitchell, T. (2017). What can machine learning do? Workforce implications. Science, 358(6370), 1530-1534.

Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. Proceedings of the 1st Conference on Fairness, Accountability and Transparency, 81-91.

CSIRO. (n.d.). Artificial intelligence. Retrieved from https://www.csiro.au/en/Research/D

Deng, Y. (2018). Machine learning applications in finance: A review of contemporary literature. Applied Economics, 50(60), 6467-6489.

Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., … & Luetge, C. (2018). AI4People—an ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28(4), 689-707.

Gartner. (2021). Top 10 strategic technology trends for 2021. Retrieved from https://www.gartner.com/smarterwithgartner/gartner-top-10-strategic-technology-trends-for-2021/

Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep learning. MIT Press.

Gunning, D. (2017). Explainable artificial intelligence (XAI). Defense Advanced Research Projects Agency (DARPA).

Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389-399.

Koehler, P. (2019). Artificial intelligence education and skills development in Australia. Retrieved from https://www.acs.org.au/content/dam/acs/acs-publications/ACS-AI-Education-and-Skills-Development.pdf

Mittelstadt, B. D., Russell, C., & Wachter, S. (2019). Exploring the explanatory power of AI: The role of sociotechnical context. Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, 203-209.

Office of the Australian Information Commissioner. (2021). AI and privacy. Retrieved from https://www.oaic.gov.au/updates/guidance-on-ai-and-privacy/

Ravi, D., Wong, C., Deligianni, F., Berthelot, M., Andreu-Perez, J., Lo, B., & Yang, G. Z. (2017). Deep learning for health informatics. IEEE Journal of Biomedical and Health Informatics, 21(1), 4-21.

Yoo, J., Kim, J., Lee, K., & Kim, J. (2018). The role of public discourse and education in AI governance. Science and Engineering Ethics, 24(6), 1923-1937.
