(Volume: 3, Issue: 2)
ANN Aids Droplet Motion-Based Stiffness Characterization Of Thin Membranes
The analysis of droplet motion, involving the controlled deposition, manipulation and movement of water, oil or any suitable liquid over a substrate, has recently gained interest among the research community. In fact, analysing both the statics and the dynamics of a droplet could aid in developing substrates, processes or tools to achieve the following: (i) Uniformly-dyed textiles in the textile industry, (ii) Perfect coating or printing in the painting/printing/packaging industries, (iii) High-throughput, sensitive assays in biomedical Lab-on-a-chip microfluidic systems, (iv) Improved thermal performance in heat exchangers or cooling systems, (v) Better systems and materials for environmental monitoring, (vi) Novel materials of higher mechanical strength, and many more. In the past, researchers have investigated the effects of droplet motion over rigid or gel substrates to determine their stiffness or hydrophobicity for use in different applications. Rohit, Syed Ahsan Haider and Abhishek Raj, three mechanical engineers from the Indian Institute of Technology, Patna, too have characterized the stiffness of thin membranes using droplet motion. However, their approach differs from past research on membrane stiffness characterization in two ways, as per their article published in Acta Mechanica, Springer, vol. 235. The first difference is that they examined droplet dynamics on a free-hanging, inclined, thin compliant membrane of Polydimethylsiloxane (PDMS), rather than on non-inclined, rigid or gel membranes.
The second main difference is that they modelled an Artificial Neural Network (ANN) to predict the Young's Modulus of the PDMS membrane. The network took as inputs the droplet's displacement over a fixed time interval, the angle of inclination, the membrane's thickness, the deflection of the PDMS membrane and the volume (V) of the droplet, all acquired from the droplet motion experiment. "The proposed method is simple, low-cost and affordable which can develop a whole new technology for stiffness measurement of thin membranes. Whole microfluidics research community working with thin membranes for various sensor applications may get immensely benefitted with the proposed technique", the researchers say, their approach attaining an accuracy beyond 0.99 for stiffness characterization. Image courtesy: www.shielding-solutions.com
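The kind of ANN regression described above can be sketched in miniature. The following is an illustrative sketch only, assuming a one-hidden-layer network trained on synthetic data with NumPy; the five input features mirror those listed in the article, but the architecture, data, hyperparameters and variable names are all assumptions, not the authors' actual model.

```python
import numpy as np

# Illustrative only: a tiny regression network mapping five droplet-motion
# features (displacement, inclination angle, membrane thickness, membrane
# deflection, droplet volume) to a stiffness value. Trained on synthetic
# data; everything here is an assumption, not the published model.
rng = np.random.default_rng(0)

X = rng.uniform(0.0, 1.0, size=(200, 5))          # normalized features
true_w = np.array([0.8, -0.5, 1.2, 0.3, -0.7])    # fake ground-truth mapping
y = X @ true_w + 0.05 * rng.standard_normal(200)  # synthetic "Young's modulus"

W1 = rng.standard_normal((5, 16)) * 0.1           # input -> hidden weights
b1 = np.zeros(16)
W2 = rng.standard_normal((16, 1)) * 0.1           # hidden -> output weights
b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)         # hidden activations
    return h, (h @ W2 + b2).ravel()  # predicted stiffness

lr = 0.1
for _ in range(3000):                # plain batch gradient descent on 0.5*MSE
    h, pred = forward(X)
    err = pred - y
    gW2 = h.T @ err[:, None] / len(y)
    gb2 = err.mean(keepdims=True)
    dh = (err[:, None] @ W2.T) * (1 - h**2)   # backprop through tanh
    gW1 = X.T @ dh / len(y)
    gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, pred = forward(X)
mse = float(np.mean((pred - y) ** 2))  # should fall well below the data variance
```

In practice the researchers would train on measured droplet trajectories rather than synthetic data; the point of the sketch is only the input/output shape of the problem.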
Fog-based Wildlife Monitoring Systems Avoid Human-Tiger Conflicts
Avoiding human-tiger conflict is of as much concern as protecting endangered tigers! In fact, it is shocking that tiger attacks claimed 103 human lives in just a year, as per official data from the Ministry of Environment, Forest and Climate Change, Government of India, dated 07.12.2023. Since the growing population and rapid urbanization shrink the tiger's habitat and livelihood, the stated death toll will only grow, not dwindle. Knowing the adverse consequences, the Indian government has launched a scheme, known as 'Project Tiger', for wildlife and habitat management. Though the scheme allows the construction of barriers like cactus-based bio-fencing, barbed wire fencing or solar-powered electric fencing to stop tigers from entering human settlements, there are shortcomings: injury or even death to the invading endangered tigers, and no timely alarm for humans to safeguard themselves, to name a few. This is where Camera Trap Technology (CTT) could help, since CTT captures images of a moving animal or human using motion-based sensors. Cloud-based systems play a major role in storing and processing the images from CTT to generate an alarm, thus providing improved security and preventing loss of life of a tiger, a human or any animal within the human locality. However, Manash Kumar Mondal, Riman Mandal, Sourav Banerjee, Monali Sanyal, Uttam Ghosh and Utpal Biswas have suggested that fog-based wildlife monitoring systems could be more advantageous than cloud-based systems for certain reasons. The first is the improved timeliness and quick response that fog-based wildlife monitoring systems provide.
The second reason is that fog-based systems do not render a pay-as-you-use service but rather involve a one-time investment in building the necessary infrastructure, which best suits long-term as well as cost-efficient wildlife monitoring. At an IEEE conference held in Orlando, Florida, USA in April 2023, the researchers put forth their fog-assisted tiger alarming framework, encompassing a motion capture module, a tiger detection module and an alert control module. As per the researchers, their system was simulated using the iFogSim simulator and showed efficiency enhancements over cloud systems, but it was not physically implemented because of limited resources and massive establishment costs. Thus, future researchers are directed to prevent attacks by endangered tigers in human localities by contributing time-sensitive, cost-efficient, physically implemented monitoring systems with improved data transfer security. Image courtesy: www.vecteezy.com
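The three-module chain the framework describes can be sketched as a simple local pipeline. This is a hypothetical sketch only: the function names, event format and confidence threshold are assumptions for illustration, not the authors' iFogSim implementation.

```python
# Hypothetical sketch of the motion capture -> tiger detection -> alert
# control chain, run on a fog node sitting close to the camera traps
# rather than in a distant cloud. All names and values are assumptions.

THRESHOLD = 0.8  # assumed detection-confidence cutoff

def motion_capture(sensor_event):
    """Camera trap fires on motion and hands a frame to the fog node."""
    return {"frame": sensor_event.get("frame"),
            "camera_id": sensor_event["camera_id"]}

def tiger_detection(sensor_event):
    """Stand-in for an on-fog image classifier; here we simply read a
    precomputed confidence score instead of running a real model."""
    return sensor_event.get("score", 0.0)

def alert_control(camera_id, confidence):
    """Raise the alarm locally, avoiding a round trip to the cloud."""
    if confidence >= THRESHOLD:
        return f"ALERT: possible tiger near camera {camera_id}"
    return "no action"

def pipeline(sensor_event):
    capture = motion_capture(sensor_event)
    confidence = tiger_detection(sensor_event)
    return alert_control(capture["camera_id"], confidence)
```

Keeping all three steps on the fog node is what buys the timeliness advantage the researchers claim over cloud processing.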
Energy-Efficient Fault Tolerance In WSNs Mandated
Wireless Sensor Networks (WSNs) are widely known to support various applications in healthcare, industry, agriculture, the military, disaster prediction, habitat monitoring and many more. However, faults are prevalent in WSNs for numerous reasons. A few of them include failures in sensor nodes or their communication links because of manufacturing defects or physical damage, interference or signal attenuation, limited battery power or its inefficient management, network congestion, software errors, malicious attacks and so on. Fault tolerance in WSNs is usually achieved with approaches like resource redundancy, error detection and recovery, energy-efficient routing, topology control and distributed algorithms, so that a faulty sensor node can be bypassed for efficient communication. Clustering, by which the sensors in the network are grouped into clusters and a cluster head is selected to manage each cluster of nodes, has long been deemed an energy-efficient fault tolerance approach in WSNs. Hence, Hitesh Mohapatra and Amiya Kumar Rath, researchers from Veer Surendra Sai University of Technology, Burla, have suggested a variant of the Low Energy Adaptive Clustering Hierarchy (LEACH) algorithm for handling faults occurring in WSNs. In their article in IET Wireless Sensor Systems, they have confirmed that their approach, termed Partition-Based Energy-Efficient LEACH (PE-LEACH), better avoids multi-hop communication between the cluster heads and the base station. Not only that, they state that their protocol enables direct information transfer from sensor nodes that are adjacent to the base station.
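PE-LEACH's exact partitioning rules are not detailed in this summary, but the LEACH family it extends elects cluster heads with a rotating probabilistic threshold. A minimal sketch, assuming the classic LEACH election formula plus a hypothetical "direct to base station" radius standing in for the adjacency rule mentioned above:

```python
import random

# Classic LEACH cluster-head election: each eligible node becomes head
# with threshold T(n) = P / (1 - P * (r mod 1/P)), which rotates the
# energy-hungry head role across rounds. The direct-transfer radius below
# is an assumed stand-in for PE-LEACH's adjacency rule, not its real logic.
P = 0.1  # desired fraction of cluster heads per round

def leach_threshold(round_no, was_head_recently):
    """Election threshold for one node; zero if it served recently."""
    if was_head_recently:
        return 0.0
    return P / (1 - P * (round_no % int(1 / P)))

def elect_cluster_head(round_no, was_head_recently, rng=random.random):
    """Node self-elects by comparing a random draw against the threshold."""
    return rng() < leach_threshold(round_no, was_head_recently)

def route(node_distance_to_bs, direct_radius=30.0):
    """Nodes adjacent to the base station send directly, skipping the
    cluster head, as the PE-LEACH summary above describes."""
    return "direct" if node_distance_to_bs <= direct_radius else "via cluster head"
```

The rotation in `leach_threshold` is what spreads battery drain evenly, and the direct route saves the extra hop for nodes that do not need one.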
Since WSNs will underpin smart infrastructure through integration with the Internet of Things (IoT) in the present as well as the future, fault tolerance and energy efficiency with improved network security and information flow turn out to be a promising theme of research for upcoming researchers.
Image courtesy: www.vecteezy.com
Integrating Autonomous Agents And Large Language Models For Sentient AI
Can human cognition be entirely mimicked by Artificial Intelligence (AI) systems? It is this notion at which sentient AI is aimed. Sentient AI refers to generative AI systems that try to imitate the human ability to perceive the surroundings with heightened consciousness, awareness and emotion, and to react subjectively, possibly with no human intervention. However, it is highly challenging to achieve sentient AI, as it requires the right choices of perception module, cognitive core, modules for self-awareness, identity checking and emotional intelligence, communication and Natural Language Processing (NLP) modules, and algorithms for learning and adaptation. With the advent of autonomous agents and Large Language Models (LLMs) like OpenAI's GPT (Generative Pre-trained Transformer) models and Google's BERT (Bidirectional Encoder Representations from Transformers), research on synthesizing sentience in AI systems has geared up in the past few years. Jeremiah Ratican, James Hutson and Daniel Plate from Lindenwood University, USA, too have integrated LLMs and autonomous agents for synthesizing sentience, in an effort to emulate human cognitive complexity. The researchers believe that sentient AI could be achieved if the modular mind theory of the brain could be impersonated. Indeed, there is scientific evidence that each human cognitive process is managed by a distinct compartment or module in the brain. So, the researchers claim in the Journal of Artificial Intelligence, Machine Learning and Data Science that an AI system could synthesize sentience if it integrates a plurality of autonomous agents informed by LLMs, each dealing with a specific human cognitive functionality and acting as a distinct module, as in the brain.
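The "plurality of autonomous agents" idea can be sketched as a toy pipeline: one stub agent per cognitive module, chained by a coordinator. Every agent below is a plain function standing in for an LLM-backed module; all names and behaviours are assumptions for illustration, not the researchers' architecture.

```python
# Toy sketch of the modular-mind idea: distinct agents for perception,
# affect and language, composed in sequence by a coordinator. Real systems
# would back each agent with an LLM call; these are deliberately trivial stubs.

def perception_agent(stimulus):
    """Perception module: normalize the raw stimulus."""
    return {"percept": stimulus.lower()}

def emotion_agent(state):
    """Emotional-intelligence module: tag the percept with an affect label."""
    positive = any(w in state["percept"] for w in ("thanks", "great", "good"))
    return {**state, "emotion": "positive" if positive else "neutral"}

def language_agent(state):
    """Communication/NLP module: produce the outward reply."""
    return {**state, "reply": f"[{state['emotion']}] I perceived: {state['percept']}"}

# The coordinator mirrors the brain's modular pipeline: perception first,
# then affect, then language production.
MODULES = [perception_agent, emotion_agent, language_agent]

def respond(stimulus):
    state = stimulus
    for module in MODULES:
        state = module(state)
    return state["reply"]
```

The design point is separation of concerns: each module can be swapped or upgraded independently, echoing the distinct brain compartments the researchers invoke.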
Sentient AI finds a myriad of applications in healthcare, education, the military, and companionship and emotional support for the aged or the physically challenged, and it can revolutionize artificial intelligence and neuroscience with improved decision support and task automation in the near future. So, upcoming researchers have a great way ahead, but with an important concern: develop and devise AI systems that aid humans, not systems that question human rights, values or ethics.
Image courtesy: www.freepik.com
Achieve Compressed DCNN Models With History-Based Filter Pruning
Nowadays, Deep Convolutional Neural Networks (DCNNs) have been found to master almost all computer vision tasks, and more. A few of their applications include: (i) Processing, segmenting or classifying an image for object detection/localization in medical, remote sensing and industrial applications, (ii) Speech recognition, Natural Language Processing and sentiment analysis, (iii) Anomaly detection and network security, and (iv) Artistic content generation. However, DCNNs, known for their layered convolution operations, also impose computational complexity and demand a large memory overhead, denying their usage in resource-constrained devices. Filter pruning is one of the effective methods that compress deep CNN models to enable rapid and resource-efficient inference on the task under study, without compromising accuracy. In filter pruning, redundant or unimportant filters are removed based on a pruning criterion like activation, magnitude or norm. Subsequently, the DCNN is retrained with the remaining filters to compensate for any accuracy loss. Hence, iterative pruning, followed by fine-tuning of the DCNN, helps to achieve compressed deep models with the desired performance. However, conventional pruning techniques do not always ensure the removal of filters that stay consistently redundant across the entire network training. Five Indian researchers, S.H. Shabbeer Basha, Mohammad Farazuddin, Viswanath Pulabaigari, Shiv Ram Dubey and Snehasis Mukherjee, have pinpointed this issue in their article in Neurocomputing, Elsevier, vol. 573. As per the researchers, if the network training history is incorporated for pruning the filters, the floating-point operations can be reduced even in popular DCNN architectures like LeNet-5, VGG-16, ResNet-56, ResNet-110 and ResNet-50. An additional optimization step employed after pruning also reduced the error rates.
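Norm-based pruning, one of the criteria mentioned above, can be sketched concisely. This is a generic single-snapshot illustration with assumed tensor shapes and a made-up keep ratio; the authors' history-based criterion instead incorporates the filters' behaviour across the whole training run.

```python
import numpy as np

# Minimal sketch of magnitude/norm-based filter pruning: rank a conv
# layer's filters by L1 norm and keep only the strongest ones. Shapes
# and the keep ratio are illustrative assumptions.
rng = np.random.default_rng(0)

# One conv layer's weights: (num_filters, in_channels, kernel_h, kernel_w)
weights = rng.standard_normal((8, 3, 3, 3))

def prune_filters(weights, keep_ratio=0.5):
    """Keep the filters with the largest L1 norms; drop the rest."""
    n_keep = max(1, int(weights.shape[0] * keep_ratio))
    norms = np.abs(weights).sum(axis=(1, 2, 3))       # L1 norm per filter
    keep = np.sort(np.argsort(norms)[::-1][:n_keep])  # indices to retain
    return weights[keep], keep

pruned, kept = prune_filters(weights, keep_ratio=0.5)
```

After such a step, the network is fine-tuned with the surviving filters (and the next layer's input channels trimmed to match) to recover any lost accuracy, exactly the prune-then-retrain loop described above.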
“One possible direction of future research is pruning the filters further by considering the similarity among the filters from different layers”, the researchers say, while showing their approach to be robust for both low- as well as high-resolution images. Since resource-constrained devices, especially mobile devices, will see massive utilization in the approaching era of the Internet of Things (IoT), cloud computing and artificial intelligence, approaches to achieve compressed DCNN models are much needed. Image courtesy: www.freepik.com