Technology – Argil DX (https://www.argildx.us)

Exploring Developer Mode in Adobe Experience Manager Sites
https://www.argildx.us/technology/developer-mode-adobe-experience-manager-sites/ (Tue, 15 Jun 2021)

Have you noticed the developer mode in Adobe Experience Manager when you switch from edit mode to preview while authoring? Ever wondered what it is for? In this article, let's explore what AEM Sites Developer Mode is and how to use it.

Fig: The Developer Mode can be found under Edit in AEM Sites.

What is Developer Mode in AEM Sites?

Developer mode in AEM shows exactly how much time each module of the page takes to render, which HTL (HTML Template Language) or JSP (JavaServer Pages) scripts are involved in rendering each component, and where each component is located in the JCR (Java Content Repository). It also provides one-click buttons and links to edit all of these.

Sounds useful? 

Yes, these are part of our day-to-day activities of diagnosing and updating page components.


How to access the Adobe Experience Manager Developer Mode

To get the information mentioned above, all you have to do is click on Developer mode under Edit and select the component you want to diagnose.

As soon as you switch on developer mode, a left panel opens showing the time required to render each component or layout container. It lists the component tree based on the component structure of your page. Click on a parent component to expand it and list all of its child components, grouped by parsys.

Please note: The time shown is the server-side computational time and not the client-side rendering time, i.e., the time required to resolve the script and compute the result, not the time taken by the JavaScript which runs in the browser.

For example, let’s see the Product Grid component of We.Retail home page. 

Fig: Product grid component within developer mode of We.Retail home page.

Here, as we can see, 0.27s is the computational time of this component. 

Let’s look at each of the icons shown for the component and find out what they represent.

Fig: The details of the components in developer mode.

1. The first icon shows the Details of the component; it lists all the scripts involved in rendering it.

2. The second icon opens up the CRX path of the component and the associated HTML/JSP file in the editor.

3. The third icon gives us the component details, like title, component group, creation time, etc. Along with this, it shows the list of all the pages where the same component is being used. (This could be useful in estimating the scope of regression and unit testing). 



Fig: Error list section in AEM Sites Developer Mode

Error Section in Developer Mode

If you have been following along hands-on, you might have noticed an error section below the component list. This section shows the errors produced by any component (if any).

You might have noticed that sometimes a component vanishes as soon as you drop it on a page or author it. You can find the reason for that behavior here.

Few things to note about Developer Mode

1. It is only available in the touch-enabled UI (when editing pages).

2. It is not available on mobile devices or small windows on desktop (due to space restrictions).

3. It is only available to users who are members of the ‘administrators’ group.

4. It is enabled by default on author instances started without the ‘nosamplecontent’ run mode. If it is required on a publish instance or in ‘nosamplecontent’ mode, you can enable it from the Felix Configuration Manager at <host>:<port>/system/console/configMgr: look for the configuration named ‘Day CQ WCM Developer Mode Filter’ and check its checkbox.

Fig: AEM Sites Developer Mode filter.

Looking for an AEM development partner? Need help migrating a site to AEM? Or want a cool feature added to your AEM website?

Talk to us for all AEM services.

Video Buffer Optimization in AEM
https://www.argildx.us/technology/video-buffer-optimization-aem-seamless-video-playback-streaming/ (Wed, 29 Apr 2020)

Video is one of the most effective and easiest mediums of communicating information to your audience. If you’re looking to use this format of communication on your website, you would want it to work as seamlessly as possible. Video buffer optimization can help you considerably improve the video playback and streaming capabilities on your website. Before diving into video buffer optimization and its implementation in AEM, let’s take a look at why it is required.

In the early days of the internet, web browsers played videos via embedded video players like RealPlayer and Windows Media Player. This required custom codecs and browser plugins. The trend then moved on to Flash and Quicktime, which slowed down our browsers and sometimes caused security concerns.

It took more than a decade for the <video> tag to be created and gain browser support. Today, most of the web uses the <video> tag and, as a result, there are fewer browser crashes, lower memory usage and smoother playback.

Basic <video> Tag Usage

Let’s look at an example of the use of <video> tag:

<video width="640" height="480" poster="sample-video.jpg" controls autoplay preload>
 <source src="sample-video.webm" type="video/webm">
 <source src="sample-video.ogv" type="video/ogg">
 <source src="sample-video.mp4" type="video/mp4">
</video>
Online Video Buffer

All this worked fine until we ran into issues with the <video> tag's behavior. By default, the tag downloads all video data on page load. If the preload attribute is set to none, the browser will not download the video on page load, but only when the user clicks or plays the video. So we are just delaying the process rather than enabling seamless video playback and streaming. Also, what if we want the video to be downloaded in packets or chunks to save bandwidth, i.e., online video buffering?

Why do we need video buffer optimization?

The HTML5 <video> tag is not flexible enough if you need to add extra features and customizations on it. For such purposes you will need to use third-party APIs. Video.js is one such API that provides extra features and theme options on our traditional <video> tag.

The larger the video, the longer it takes to load and start playing. So, if we use a large video on a slow internet connection and still want to provide a smooth experience to the end user, we will have to optimize the video buffering process.

The workaround for this problem is ‘play, pause and progress’: every time the progress event is triggered, we check whether the video has buffered by some additional percentage (say 10%) before resuming playback. The player continues buffering only “what’s needed” while the video plays, which means the video is never fully buffered up front.

Steps for Implementing the Video Buffer Optimization

Let’s take a look at how to implement video buffer optimization in AEM to allow more seamless video playback and streaming.
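The handler below assumes a Video.js player instance and a few state variables. A minimal setup sketch might look like the following; it assumes the <video> element shown earlier is given an id of "sample-video" and the video-js class, and the variable names are illustrative, not part of any specific API:

// Minimal setup sketch (assumptions: Video.js is loaded on the page and the
// <video> element has id="sample-video"). Variable names are illustrative.
var Player = videojs("sample-video");

var lastTime = -1;       // last observed playback position
var buffered = true;     // whether enough data is currently buffered to keep playing
var bufferPause = false; // whether playback was paused by this buffer check
var lastBuffer = 0;      // buffered fraction recorded when a stall was detected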

Player.on("progress", function () {
    // Get the current video seek time.
    var currentTime = Player.currentTime();

    if (lastTime !== currentTime) {
        // Playback is advancing normally.
        lastTime = currentTime;
    } else if (Player.paused() === false) {
        // paused() still returns false while the player is waiting for data,
        // so a playback time that has stopped advancing means the video is buffering.
        buffered = false;
        bufferPause = true;                    // indicate that the pause was initiated by this buffer check
        lastBuffer = Player.bufferedPercent(); // buffered fraction when the stall was detected
        Player.pause();                        // pause until an additional chunk has been buffered
    } else if (!buffered && (Player.bufferedPercent() - lastBuffer > 0.1)) {
        // Resume playing once an additional 10% (0.1) of the video has been buffered.
        buffered = true;
        if (bufferPause) {
            bufferPause = false;
            Player.play();
        }
    }
});

So now you can get to work on providing smooth video playback and streaming for your users on your AEM website. Feel free to write to us with any queries on video buffer optimization or anything related to AEM.

Contact us for AEM Integrations and Customizations

While AEM comes packed with features and functionalities that make the life of developers and marketers easier, it can be enhanced to offer top-notch experiences to your users. Learn more about such AEM personalizations that can enhance your users’ digital experience from certified Adobe Experience Cloud experts at Argil DX.

AEM Welcomes the Mighty Microservices
https://www.argildx.us/technology/microservices-architecture-with-osgi-framework-modularization-aem/ (Thu, 05 Mar 2020)

The idea of programmers aiming to design applications that are modular in concept has been around since the inception of application development. The concept of modularization has, however, evolved over decades to deliver superior inventions.

When we talk about Adobe Experience Manager (AEM), it is also equipped to serve modular applications (meaning that it is capable of supporting a microservices architecture), thanks to OSGi (Open Services Gateway initiative). OSGi “provides the standardized primitives that allow applications to be constructed from small, reusable and collaborative components. These components can be composed into an application and deployed. This allows easy management of bundles as they can be stopped, installed, started individually. The interdependencies are handled automatically.” (Read more) All of this works well, in fact very well.

AEM-based applications are often complex, consisting of multiple bundles that individually connect to cumbersome APIs and more. Such architectures seem quite natural and obvious in AEM-based systems: everything is deployed on a single instance, and when you want to scale horizontally, you add another instance. Simple, and it works, at least in theory.

In practice, some parts of an application are used far more extensively than others and consume far more resources. Adding an entire AEM instance just to scale a particular module doesn't make sense when the TCO (total cost of ownership) for one instance is already high. In total, traditional AEM applications end up being a monolith.

Enter microservices. You scale a module hosted on a different server. Where is the AEM server? Well, it's still there, acting as just another service.

Argil DX has developed F-AI-shion Police with the above approach.

F-AI-shion Police 

Fig: High-level architecture of F-AI-shion Police

Here, we’re invoking the “Object Detection Service” from the UI (JavaScript) of our page component.

The Object Detection Service in this tool resides on a server of its own. You scale it according to your needs, making cost-effective decisions. These architectural decisions are of great business value.
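As an illustration of what such a call from the page's JavaScript might look like, here is a minimal sketch; the endpoint URL, request payload and response shape are assumptions for the example, not the actual F-AI-shion Police API:

// Minimal sketch of invoking an external object-detection microservice from the
// page UI. Endpoint and payload are illustrative only.
fetch("https://object-detection.example.com/api/detect", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ imageUrl: "/content/dam/we-retail/sample.jpg" })
})
    .then(function (response) { return response.json(); })
    .then(function (detections) {
        // Render the detected objects on the page component.
        console.log("Detected objects:", detections);
    });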

What are your opinions on microservices for AEM? Do let us know. 

How to Implement Full Software Testing Life Cycle (STLC) and Apply Testing Techniques
https://www.argildx.us/technology/how-to-implement-full-software-testing-life-cycle-and-apply-testing-techniques/ (Tue, 18 Feb 2020)

Software Testing trends over the decades
conversation between product owner and developer over the importance of software testing
conversation between product owner and developer about software testing reducing bugs
product owner asking testers to work harder to improve software

As we can see in the above images, the trend and competency of software testing have changed over the years. QA testers are now more technical and process oriented. In these changing times, testing has grown beyond just finding bugs into a wider scope. It is generally performed right from the inception of the project when the requirements are not even finalized.

Since testing processes were standardized after the year 2000, testing also has its own lifecycle, just like development. In the world of testing, we call that lifecycle the STLC, which is explained in the following section.

Let’s start! 

lines of codes on a screen
What is a lifecycle in software testing?

A lifecycle is nothing but the sequence of changes an entity goes through from one form to another. These changes happen to tangible and intangible things alike. Every entity has a lifecycle from its inception to its retirement.

In a similar way, every piece of software is also an entity. Testing, too, has a series of steps which should be executed in a definite sequence, just like developing software.

The process of executing the testing activities in a systematic and planned manner is called testing life cycle. 

What is Software Testing Life Cycle (STLC)?

STLC – Software Testing Life Cycle refers to a testing process which has certain steps to be executed in a sequence to ensure that the quality goals have been met in each phase. In STLC process, each activity or phase is carried out in a planned and systematic way. Every phase has different goals and deliverables. Different organizations might have different phases in STLC; however, the basis remains the same.

Below are the basic phases of STLC:

  1. The Requirements phase – The teams are involved in finalizing the initial draft of requirements.
  2. Planning Phase – Planning the resources and activities which will be involved in the project.
  3. Analysis phase – Depicting the test scenarios and requirement clarifications if any.
  4. The Design Phase – Designing of all the documents which will be used in the project.
  5. Implementation Phase – Test writing and its prioritization and preparation of sanity and regression suites.
  6. Execution Phase – The main phase where the actual execution takes place.
  7. Conclusion or Closure Phase – Submit the metrics and reports. UAT with clients or closure activities to ensure the exit criteria is met. 

    Let’s take a look at each phase in detail.
1. Requirement Phase: 
people huddles around a table with stationery, drinks and devices

During this phase of STLC, testing teams analyze and study the requirements. The testers usually conduct meetings to discuss the requirements with the business analysts’ team and all the stakeholders to see what is testable and what are the areas of focus. This phase of STLC helps to identify the scope of the testing. If any feature is not testable or has a dependency, then QA needs to communicate the same during this phase so that the mitigation strategy can be planned. It’s a good practice to create diagrams and flow charts to understand the requirements and the user journey with the development team. 

2. Planning Phase
a person's hands taking notes in a diary over a table representing the planning stage in software testing

In general, test planning is supposed to be the first step of the testing process. In this phase, we identify the various activities and resources which will help meet the testing objectives. It is in this phase that the testing team decides the reporting structure and the tools to be used, such as a defect management tool, test case management tools, automation tools, etc. The test plan and test strategy documents are also created in this phase; how to implement this practice is explained in the 6th QA practice of this article.

3. Analysis Phase
a man looking a notes pinned on a wall and analyzing

This STLC phase, in my opinion, is the most important phase as it defines WHAT is to be tested. We must identify the test scenarios from the requirements document, product risks and other test bases. The test scenarios should be traceable back to the requirements.

How to write efficient test scenarios is explained here.

4. Design Phase
silhouette of woman experiencing VR content in front of a tiled display of a car illustration

In this phase, we define the various ways and mediums through which we will conduct the test. This phase involves the following tasks: 

  • Detailing the test scenarios. Breaking down the test scenarios into multiple sub-conditions to increase coverage.
  • Identifying and setting up the test data
  • Identifying and setting up the test environment or finalizing the devices and browser list on which the testing will be performed
  • Creating the RTM document (requirement traceability matrix), which helps trace the test scenarios and test cases back to the requirements
  • Creating the test coverage metrics to depict the coverage of each functionality or module. This can be optional, as the coverage can be extracted from the RTM as well.
5. Implementation Phase
man releasing a dart at the target after aiming

This STLC phase is for the creation of detailed test cases from the test scenarios created earlier. It is a good practice to assign a priority to each test case, which helps the test team extract the priority items to create the sanity and regression suites. Also, please make sure that these test cases are duly reviewed by all the stakeholders and product owners; getting their sign-off is one of the most important aspects of a successful project. Involve your clients and get them to interact more in this phase to get your test cases and automation test scripts reviewed and signed off.

6. Execution Phase
the word 'now' written across an upward trending arrow and circled with red

As the name suggests, this is the phase where the actual execution takes place. Before starting the execution, we need to make sure that our entry criteria are met. The entry criteria need to be finalized in the test planning phase and should be mentioned in the test strategy document, which should be signed off before this phase starts.

Execute > Log defects > Track your progress > Update RTM and test cases when you find bugs while doing exploratory testing.

7. Conclusion or Closure Phase
a cartoon of a human standing near a stop sign post

This is the end of one software testing life cycle. It concentrates mainly on the exit criteria and reporting, and includes a retrospective meeting to capture the lessons learnt. The reporting structure can be decided in the design phase and should be used diligently in this phase to communicate progress and get sign-off on the exit criteria. UAT (User Acceptance Testing) should also be conducted in this phase with the product owners to get their sign-off, and we also need to discuss any remaining P1 issues and the plan to fix them, if any. Retrospective meetings are an essential part of each lifecycle as they help the whole project team understand what went wrong and what went right, so the team can improve in the next cycle.

Summary of the STLC Phases

Let's summarize the Software Testing Life Cycle (STLC) phase by phase, listing the entry criteria, activities performed and deliverables of each phase:

1. Requirements
   Entry Criteria: Requirements specification document; application design document; user acceptance criteria document
   Activities Performed: Brainstorm on the requirements, create a list of requirements and get your doubts clarified; understand the feasibility of the requirements (whether they are testable or not); if your project requires automation, carry out an automation feasibility study
   Deliverables: RUD (requirements understanding document); testing feasibility report; automation feasibility report

2. Planning
   Entry Criteria: Updated requirements document; test feasibility reports; automation feasibility report
   Activities Performed: Define the scope of the project; do the risk analysis and prepare the risk mitigation plan; perform test estimation; determine the overall testing strategy and process; identify the tools and resources and check for any training needs; identify the environment
   Deliverables: Test plan document; risk mitigation document; test estimation document

3. Analysis
   Entry Criteria: Updated requirements document; test plan document; risk document; test estimation document
   Activities Performed: Identify the detailed test conditions
   Deliverables: Test conditions document

4. Design
   Entry Criteria: Updated requirements document; test conditions document
   Activities Performed: Detail out the test conditions; identify the test data; create the traceability matrix
   Deliverables: Detailed test condition document; requirement traceability matrix; test coverage metrics

5. Implementation
   Entry Criteria: Detailed test condition document
   Activities Performed: Create and review the test cases; create and review the automation scripts; identify the candidate test cases for regression and automation; identify and highlight priorities to extract test cases for the sanity and regression suites; identify/create the test data; take sign-off of the test cases and scripts
   Deliverables: Test cases; test scripts; test data

6. Execution
   Entry Criteria: Test cases; test scripts
   Activities Performed: Execute the test cases; log bugs/defects in case of discrepancy; report the status
   Deliverables: Test execution report; defect report; test log and defect log; updated requirement traceability matrix

7. Conclusion
   Entry Criteria: Updated test cases with results; test closure conditions
   Activities Performed: Provide the accurate figures and results of testing; identify the risks which were mitigated
   Deliverables: Updated traceability matrix; test summary report; updated risk management report

8. Closure
   Entry Criteria: Test closure condition; test summary report
   Activities Performed: Hold the retrospective meeting and capture the lessons learnt
   Deliverables: Lessons learnt document; test matrices; test closure report

You can also read our post on improving website performance here. Write in to Argil DX to learn more about how we practice rigorous QA testing in agile software delivery.

Argil DX is a leading implementer of AEM and other solutions in the Adobe Experience Cloud. Contact us for more on our services.

7 Best QA Testing Practices for Agile Software Delivery
https://www.argildx.us/technology/best-practices-qa-testing-agile-software-delivery/ (Mon, 10 Feb 2020)

The best QA testing practices are still in demand amongst many organizations. QA teams are always looking to develop the best strategy to improve efficiency and deliver top quality products to stakeholders. 

Testing teams across projects, from mobile to software to CMS websites, are adopting the Agile process. While Agile methods have helped improve the profile of testers in many ways, they have also raised questions, specifically about how to be most effective in short sprints while delivering a quality product.

At Argil DX, we emphasize client-focused delivery, wherein we study the client's business requirements and decide on the best QA testing practices for agile software development.

Let’s start with a look at some of the best QA testing practices enabling us to deliver quality software products to clients. These practices can be effective in helping your team deliver quality products. 

1. Communication
a man holding a can with a thread and talking into it for communication
No matter the mode of communication, it should be effective and enhance collaboration.

A general challenge every QA team faces is maintaining flawless communication between all the parties involved in a project, i.e. developers, management and customers. Effective communication, supported by a process to follow and maintain, enables productive collaboration. Simple examples include creating a model ticket, establishing code review procedures and defining clear labeling schemes. Most teams fail to make proper use of the available tools to communicate effectively. This in turn raises issues like sudden changes in the code, or a minor change in the requirement that wasn't documented or mentioned in the user story.

Tickets in Pivotal Tracker or defect management tools like JIRA are a fundamental part of the best QA testing practices at Argil DX; we use them all day, every day. Whether you're describing a bug or a feature, ensure that the ticket you create is described in a detailed and easily understandable manner.

It's the ticket creator's job to give the exact details necessary to address the ticket: the title or summary, the body or description, the steps to reproduce, and labels if required. It should clearly state the problem and/or the expected outcome.

When creating a ticket, you can use sub-tasks if you feel they will help break up the main task into smaller tasks that are easier to achieve. Remember, never create an incomplete ticket. Sometimes, tickets are prepared hurriedly and on the go during a meeting or sanity testing and require more description to be added later. That's OK. However, it's a common scenario that the description section is simply skipped over because it is assumed the title will convey the outcome.

Every time you create a ticket ask yourself the following questions before completing. 
a. Will someone else easily understand this ticket? 
b. Did you provide the detailed information and your own insight? 
c. Could you easily give this ticket to a QA team member from another project and would he or she still be able to check it without any extra explanation? 

If you answer these questions with an emphatic yes, especially the last one, then it’s a good sign that the ticket is ready for the world.

Lastly and most importantly, always keep your tickets up to date. Always add information about changes in the project, even if they are small ones. For example, if you decide to change the content, mention it. Remember your QA pays attention to detail and after noticing the updates, they will decide if the particular changes were intended or not. If a new mockup is ready, always replace the outdated file with the latest version. 

2. Testing to meet the usability or UX standards your customers expect 

We believe in usability testing and like to find even the simplest of usability flaws in a piece of software. However, on looking closer, we sometimes find that the software was delivered by the development team without understanding the requirements. The issue is then classified as “Worked as designed.”

This is where we must communicate more with our clients and understand what the best user experience will be, by keeping in mind the type of users who will be using the software and some day-to-day activities conducted on that software/website/application.

The most common trait among top-notch testers is their laser-like focus on the user experience. It's far too easy for testers to get lost in the weeds of test cases and forget about the actual end user; however, this is a fatal mistake.

QA graph for usability and functionality testing

We’ll never be able to catch every weird, obscure bug, but there are always some design elements where they tend to lurk. By focusing our testing efforts on some of the following areas — or at least not neglecting them — we can catch more issues before our customers do:

a) Too many fields on a single page
Your end users are probably doing multiple things at once, so if you give them more than 10-15 fields to enter before they can save, it is an issue. You can suggest using a multipage form or a method for the user to save a transaction in a temporary state. The idea is to create more user-friendly design and workflow of that module.

b) Authoring experience in CMS website
So many companies have a separate team of content authors who put content into the website daily. In scenarios like these, the content authors become your end users. It becomes imperative for any QA team member to put themselves in their shoes while testing a component and suggesting changes to the authoring experience. The authoring of any page or component should be easy to understand, with the most important fields or options highlighted first, before the user moves on to the less important authoring fields.

c) User journey in a software, website or application
This is a very important area of focus while testing any product, as it is the foundation on which user experience and usability were introduced to the world of QA. There are many studies and reports of applications with a good concept and idea failing in the market because of a bad user experience caused by an unsatisfactory user journey. The user journey should be extremely simple and easy to understand, with the desired results displayed at every step of the way. Otherwise, the user gets exhausted because they are just selecting options and nothing relevant to their search is displayed.

metal building structural framework
A proper testing framework enables swift agile software delivery.
3. Build an exciting framework of testing approach: 

The QA team can come up with new testing strategies in the agile world of short sprints to deliver a quality product. Some of these testing approaches are:

a) QA team should develop a healthy relationship with Business Analysts.
b) Ensure that every user story is specific to what you want to tell the user. It should also be testable and include acceptance criteria.
c) Don’t ignore non-functional testing such as load, performance and security testing. Make sure that we do both functional and non-functional testing from the very start of the project.
d) Build meaningful end-to-end test scenarios by utilizing trends, data and analytics from the existing product to gather information about user activities and user journeys through the application.
e) Build a strong testing/QA practice which drives development. Define an agile QA testing strategy and adopt tools which offer a good amount of customization for the needs of your project.
f) Conduct regular QA workshops within the team where the testers can improve their technical skills as well as soft skills.
g) Take advantage of technical architecture diagrams, models of the application and mind maps to implement suitable test techniques. 
h) Embed QA within the teams with appropriate owners, so that they are aware of any changes to the application.

4. Implement SBTM – Session Based Testing Management:

SBTM (Session-Based Test Management) is “a method for measuring and managing exploratory testing.” In a nutshell, it is a test management framework designed for effective exploratory testing and for producing results without executing scripted test cases.

Exploratory testing is always unscripted, unrehearsed testing. Its effectiveness purely depends on several intangibles: the skill of the tester, their intuition, experience, and ability to follow hunches and look for unexplored areas. It’s these intangibles that often confound test managers when it comes to being accountable for the results of the exploratory testing performed.

For example, at the end of the day, when a team lead or manager asks an exploratory tester for status, they may get an answer like “Oh, you know… I tested some modules here and there, just looking around as of now.” Even though the tester may have filed several bugs, the manager might have no idea what they did to find them. Even if the lead or manager is skilled enough to ask the right questions about what the tester did, the tester may have forgotten the details or may not be able to describe their findings in a quantifiable manner.

For this very problem, there is a framework named Session-Based Test Management (SBTM). This framework can be used in the form of a template (session metrics, derived from the sessions performed) that documents the effort spent on exploratory testing. These session metrics are the primary means of expressing the status of the exploratory test process, and they contain the following elements:

  • Number of sessions completed or rounds of testing performed
  • Number of problems or issues found
  • Functional areas covered such as the modules, pages or components
  • Percentage of session time spent on setting up for testing, which is the average time of setting up the test data and test environment if any authoring or content sync is required
  • Percentage of session time spent testing which is the actual time spent on testing the mentioned modules, components, etc.
  • Percentage of session time spent investigating problems which is the investigation time spent on each bug or an issue 
5. Effective use of severity and priority: 

Defect tracking is one of the most important aspects of the defect lifecycle. This is important because test teams open several defects when testing a piece of software, application or website, and these will only multiply if the system under test is complex. In such scenarios, managing these defects and analyzing them to drive closure can be a formidable task.

It is a useful practice to add Severity & Priority to each of the bugs entered in the system for a project. It helps all the stakeholders to track and maintain the bugs and prioritize the fixes accordingly.

However, these two are often confused and used almost interchangeably by test teams as well as development teams. There's a fine line between the two (severity and priority), and it's important to understand that there are indeed differences between them.

Let’s have a quick look at how these two differ from each other.

defining test severity and priority

“Priority” is associated with scheduling the fix, and “severity” is associated with standards.

“Priority” signifies that something is important and deserves to be attended to before others.

“Severity” is the state of being marked by strict adherence to rigorous standards or high principles.

The words priority and severity always come up in bug tracking. Determining the severity of a bug helps the development team to prioritize the bugs to be fixed. The priority status is often used by the product owners to determine which bug needs attention before going live.

You can find a range of commercial software tools for problem tracking and management. These tools, with detailed input from software test engineers, give the team complete information so that the developers can understand the bug, get an idea of its ‘severity’, reproduce it and fix it.

The fixes are based on project ‘priorities’ and ‘severities’ of bugs.

The ‘severity’ of a problem is defined in accordance with the customer’s risk assessment and recorded in their selected tracking tool.

Software that contains bugs can affect your release schedules, leading you to reassess and renegotiate the project priorities.

6. Defining best test strategy and test planning 

This is undoubtedly the most important of the best QA testing practices, and every QA team should perform it regardless of the type of project. There are standard ways to define a test strategy and test plan; most companies use them to note the key points in the document, then dump it in a repository and never refer to it again.

We should not only define the test strategy and test plan documents but also review them constantly with our clients and update them according to the different phases of the project.

an architectural plan with ruler and pen
Clear planning, regular update and active implementation ensures effective testing.

Here is how we like to use test plan documents:

  • Master test plan: A high-level test plan for a project or product that unifies all other test plans or contains all the information about the project and gets updated on regular basis. It’s basically like an encyclopedia of the project from testing perspective.
  • Testing level specific test plans/Low level test plans: Plans for each level of testing or each module of the project.
    – Unit Test Plan
    – Integration Test Plan
    – System Test Plan
    – Acceptance Test Plan
    – Story/Module Test plan
  • Testing type specific test plans: Plans for major types of testing like Performance Test Plan and Security Test Plan and Regression Test Plan.


    Test Plan Guidelines

    1. Make the plan concise. Avoid redundancy. If a section has no use in your project, go ahead and delete it from your test plan.

    2. Be specific to every detail you provide in the test plan. For e.g., when you specify an operating system as a property of a test device, mention the OS Edition/Version as well, not just the OS Name.

    3. Make good use of lists and tables wherever possible. Avoid lengthy paragraphs and convert your information into bullet points.

    4. Make sure that the test plan is reviewed by all the stakeholders several times prior to baselining it or sending it for approval. The quality of your test plan will entail the quality of the testing you or your team are going to perform.

    5. Update the plan as and when required. An outdated and unused document is worse than not having the document in the first place. 
7. Writing test scenarios instead of test scripts: 

There are a lot of terms and phrases used in the world of software testing. From a wide range of different testing methods, to a variety of testing types, and many test case templates that make up a software or application test, it can be hard to remember what exactly each term means. We like to keep things simple at Argil DX and just focus on the core of testing. Take for example “Test Cases” and “Test Scenarios”; what’s the difference and why are they needed?

In brief, a test scenario is what is to be tested and a test case is how it is to be tested. Moreover, a test scenario is typically a collection of test cases.

a table representing a test scenario being occupied by beakers representing test cases
Defining test scenarios and test cases helps focus on the core of testing.

How does it help?

Test Scenario
The purpose of writing test scenarios is to test the end-to-end functionalities of a software application, to ensure the business processes and flows are functioning as needed. In scenario testing, the tester thinks like an end-user and determines real-world scenarios (use-cases) that can be performed. Once these test scenarios/use cases are determined, then these test scenarios can be converted into test cases for each scenario. Test scenarios are the high-level concept of what to test, covering the major functionality of any module of a project.

Test Case
The test cases are a set of steps to be executed by a tester in order to validate the test scenario. Whereas test scenarios are derived from use cases, the test cases are derived and written from those test scenarios. A test scenario can have multiple test cases associated with it, because test cases lay out the low-level details of how to test that scenario.

Example
Test Scenario: Validate the login page of a certain application.

Test Case 1: Enter a valid/invalid username and password
Test Case 2: Reset your password or click on the reset password CTA. (Call to Action – Button)
Test Case 3: Enter invalid credentials in both the fields 

So, these are the seven best QA testing practices that we follow at Argil DX.

Read about the implementation of Software Testing Life Cycle (STLC) here. You might also be interested in reading our Testing Guide for improving website performance. For any queries related to software QA testing principles or performance testing and enhancement, reach out to us. We’d be glad to help you!

How to Create and Enable Dynamic Templates in AEM
https://www.argildx.us/technology/how-to-create-and-enable-dynamic-templates-in-aem/ (Tue, 28 Jan 2020)

A new feature called Dynamic Templates was introduced in AEM 6.2. This feature is also known as Editable Templates in AEM for the obvious reason that these templates can be created and edited at any time. 

Generally, the control of creating and structuring templates is given to more specialized author groups called “Template Authors”. 

The diagram below depicts the requirements to create a content page.

the different stages and roles involved in content page creation

In order to fulfil these requirements, the collaboration of roles below is important:

1. The developer concentrates on the development of page components and template types and provides the necessary information to the template author.
2. The template author is responsible for creating templates from template types and configuring the use of components on templates.
3. The content author is responsible for creating and editing content pages using templates.

Dynamic templates provide tremendous capabilities and flexibility in configuring templates
  1. Rather than developers, template authors can create and edit templates.
  2. Provides template authors with touch optimized UI “Template Console” for managing the lifecycle of templates and “Template Editor” to configure templates.
  3. For any page a content author creates using a dynamic template, template authors need to configure three parts of the template:
    Structure: Define components and content for template
    Initial Content: Define content visible on page when it is first created
    Policies: Define design properties of a component
  4. A dynamic connection is maintained between the page and the template: modifications to the structure are reflected on all pages created using the template, whereas modifications to the initial content are reflected only on new pages created using the template afterwards.
  5. All the dynamic templates are stored under /conf
  6. Design mode is not offered on pages created using dynamic templates; the same control is achieved by creating component policies in the template editor.
  7. Provides concept of Page Policy to define specific client-side libraries to be loaded on each template.

Let’s jump into implementation and see how to create dynamic templates in AEM.  

Step 1: Activate Editable Templates in your project 

With editable templates, templates are stored under /conf. In order to create a project-specific template hierarchy under /conf, we can use the Configuration Browser.

  1. Navigate to Tools > Configuration Browser
  2. Click on Create 
a demonstration of creating editable templates configuration in configuration browser

3. On Create Configuration dialog, configure:

  • Title: Enter title with your project name 
  • Editable Templates: Check this option

4. You can see below generated template structure on CRXDE:

Step 2: Create a new Page Component
  1. Open CRXDE Lite User interface : http://<host>:<port>/crx/de/index.jsp
  2. Navigate to page components folder of your project
  3. Create a new page component with following details: 
page component creation step

Rename the base-page.jsp created under the base-page component to base-page.html.

Step 3: Next step is to create template type
  1. Navigate to /conf/argildx-project/settings/wcm/template-types
  2. Create a node with name “empty-page” and Type “cq:Template” 

3. Create jcr:content node under empty-page with below properties

 <jcr:content 
        jcr:primaryType="cq:PageContent" 
        jcr:description="ArgilDX Empty Template for generic pages" 
        jcr:title="ArgilDX Empty Page"/>  

4. Create initial node of type cq:Page under empty page
5. Create jcr:content node under initial with below properties 

 <jcr:content 
        jcr:primaryType="cq:PageContent" 
        sling:resourceType="argildx-project/components/structure/base-page"/> 

6. Create policies node of type cq:Page under empty page
7. Create jcr:content node under policies with below properties 

 <jcr:content 
        jcr:primaryType="nt:unstructured" 
        sling:resourceType="wcm/core/components/policies/mappings"> 
</jcr:content>  

8. Create structure node of type cq:Page under empty page
9. Create jcr:content node under structure with below properties 

 <jcr:content 
    cq:deviceGroups="[mobile/groups/responsive]" 
    jcr:primaryType="cq:PageContent" 
    sling:resourceType="argildx-project/components/structure/base-page"> 
    <root 
        jcr:primaryType="nt:unstructured" 
        sling:resourceType="wcm/foundation/components/responsivegrid"/> 
</jcr:content> 

Template type is all set and hierarchy should look as shown below. 

template type hierarchy
Step 4: Now template authors can go ahead and create templates using template type 
  1. Go to Tools > Templates > go to your project (argildx-project) > click on Create from the toolbar.
  2. You can see the ArgilDX Empty Page template type that was created. Select this template type and click on Next.

4. Provide a suitable title and description and click on Create.
5. Click on Open in the confirmation dialog to view the template.
6. The template contains a layout container to drag and drop components into, as shown below:

Note: This layout container is picked up from the structure node (wcm/foundation/components/responsivegrid) of the template. For your custom page component to pick up components from the structure node, add the code below in the body of your page component.

 <sly data-sly-use.templatedContainer="com.day.cq.wcm.foundation.TemplatedContainer" 
       data-sly-repeat.child="${templatedContainer.structureResources}" 
       data-sly-resource="${child.path @ resourceType=child.resourceType, decorationTagName='div'}"/>  

Note: Add the code below in the head section of your page component to enable the tab view of components in structure mode when the layout container is unlocked:

 <head data-sly-use.clientlib="/libs/granite/sightly/templates/clientlib.html"> 
    <sly data-sly-call="${clientlib.all @ categories='wcm.foundation.components.parsys.allowedcomponents'}"/> 
</head> 
  
Step 5: Working with templates  

There are three parts to be built in each template: 

  1. Policies define design properties of a component i.e. allowed components and its design.
  2. Initial defines default components and content to be available when a page is first created using this template.
  3. Structure defines the structure of a template.

5.1) Template Policies

When a page is created using a dynamic template, content authors do not get design mode in the page editor. Instead, the template author defines content policies in the template editor to persist the design properties of components.

There are two types of policies which we can define: 

a. Template Level Policy (Page Policy):

– It allows template authors to define the specific client libraries to be loaded on each template, which reduces unnecessary calls to libraries.
– The template author, rather than the developers, has control over adding or removing client libraries.

  Let’s check out how we can make use of it: 

  1. Create a client library folder named clientlibs (you can use any name) of type cq:ClientLibraryFolder under the page component, with a unique categories value defined. 
  2. Add required client-side libraries to be loaded on base page. 
UI for adding required client side libraries to base page

3. Now we have to apply Page Policy on the new template created.
4. Open newly created template.
5. Click on Toggle button on toolbar and Click on Page Policy

steps for applying page policy to the newly created template

6. On the left side Policy Dialog, we have
– Select Policy to select any existing policy.
– In Policy Title, provide title Base Policy
7. On the right-side, Properties dialog we have
– Client-Side Libraries field, add the categories name defined in the properties of the clientLib created in step 1, i.e argildx.base
– Click on submit

choosing a policy and adjusting its settings to configure the component

8. On submit, you can see an alert message pop up on the template; this alert was added in the alert.js file of the clientlib (argildx.base), as shown in the sketch below the figure.

popup message after submitting desired policy for the template
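For reference, the alert.js in that client library can be as simple as the sketch below; the original walkthrough does not show the file's contents, so this is only an illustration of the verification script:

// alert.js (illustrative content): pops up a message so we can verify that the
// argildx.base client library configured in the page policy actually loads.
alert("argildx.base client library loaded");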

Note: For your custom page component to pick template level policies, add below code in the head section of your page component (base-page.html)

<head data-sly-use.clientlib="/libs/granite/sightly/templates/clientlib.html">
    <sly data-sly-test.clientlibCategories="${currentStyle['clientlibs']}"
         data-sly-call="${clientlib.all @ categories = clientlibCategories}"></sly>
</head>

b. Component Level Policy

– It allows authors to define design properties for each parsys.
– To add policies on components using design dialog.

Let’s configure Policy on layout container:
1. On the Template, select the layout container and click on policy toolbar action shown below:

selecting layout container on template

2. On the left side Policy Dialog, we have
– Select Policy to select any existing policy.
– In Policy Title, provide any suitable title. Ex: General group
3. On the right side Properties Dialog, we have
– Allowed Components which has list of components and group that can be selected to be made available on selected layout container
– Select any group (General)
4. Click on submit

adjusting the settings of the selected policy

5. Now you can see that selected group of components can be added in the layout container.

adding selected group of components in the layout container

5.2) Lock/unlock Components:

1. Components can be locked/unlocked using Lock button next to policy button on toolbar. This feature defines whether content is available for modification in initial content mode.
2. Locked components remain in structure mode and can’t be edited on resulting pages.
3. Unlocked components will be available under initial content mode and can be edited on resulting pages.

feature for locking and unlocking components

Note: After pages are created using the template, if structure of same template is updated then these pages will be affected by changes.

5.3) Initial Content
In this mode, define default components and content to be available when a page is first created using this template. Unlock the layout container available in structure mode to be able to configure it in initial content mode.

In the below example, I have added Title component and authored with title “Base Page Title”

adding title component as base page title

Note: After pages are created using the template, if initial content of same template is updated then these pages will not be affected by changes.

5.4) Design Dialog for components at Template Level

Configurations made through the design dialog of a component can be edited by clicking on the policy icon, as shown in the diagram below:

editing configurations with design dialog of a component
Step 6: Enable Template from Console

Finally, once template is ready to be used for creating pages:
1. Select newly created ArgilDX Base Template from Template Console
2. Click on Enable from toolbar.
3. Click on Enable button from confirmation window.

selecting newly created base template and enabling it

Note: In case a template is marked as disabled after creating some pages with it, then existing pages remain unaffected. However, the template will no longer be available for creating new pages until it is enabled again.

Step 7: Once Enabled, Publish Template

1. Select newly created ArgilDX Base Template from Template Console
2. Click on Publish from the toolbar.

Step 8: All we must do now is allow the template under the root page of our website in order to use it

1. Open page properties of root page of your website
2. Click on Advanced Tab
3. Under Template Settings, click on add to define Allowed Templates path
Eg: /conf/<your-folder>/settings/wcm/templates/.*

allowing newly created templates under root page of our website

Now, you have successfully created templates and made them available for content authors to create pages.

Full-Text Search in AEM Pages and Assets including PDF, Excel and PowerPoint
https://www.argildx.us/technology/fulltext-search-in-aem-pages-assets-custom-search-pdf-excel-and-powerpoint/ (Wed, 23 Oct 2019)

The post Full-Text Search in AEM Pages and Assets including PDF, Excel and PowerPoint appeared first on Argil DX.

]]>
Search is an important feature of any website. Implementing an efficient search on your website can considerably improve the experience of your visitors. For websites on AEM, creating a custom search component without creating any new indexes has been a challenge. 

We took up the challenge 

We created Full-Text Search – a custom search component to help end users search through all your web pages and published assets. This includes searching through PDFs, Excel files, PowerPoint presentations, asset metadata and SEO tags. This is a generic search component which can be used to search within any content and DAM hierarchy.

Compared to the OOTB search component of AEM, the custom search component performs a full-sentence search instead of searching for the individual words of a sentence. For asset search, it can even provide the page number on which the text is present.

The objective behind creating a custom search component

To create a search component in AEM that enables users to search for any word, number, sentence or even special characters in AEM website pages as well as DAM assets (PDFs, Excel files, PowerPoint presentations).

The approach taken to create our AEM Search component 

We used Omnisearch API with QueryBuilder, which in turn uses Lucene indexes to perform effective and efficient searching. 

Prerequisites for creating Full-Text Search in AEM

For efficient searching, please validate that your AEM instance has the following index nodes:

  1. /oak:index/lucene
  2. /oak:index/cmLucene
  3. /oak:index/damAssetLucene
  4. /oak:index/nodetype
  5. /oak:index/cqPageLucene
How to implement our Full-Text Search component in your AEM instance?
  1. Create a component with the search directory as a dialog field, and a text field along with a submit button on the display layer.

    The component dialog will look like this:
Fig: The component dialog box for Full-Text Search in AEM.

The basic UI will look like this, but you can customize it the way you want.

Fig: The basic UI of the component with a search directory.

2. Add an AJAX call on the submit button click, which sends the search string and the search location.

3. Create a servlet which gets the search parameters: 

  1. Search string
  2. Search location 

4. Create a query using ‘QueryBuilder’ to perform the search (a sketch covering steps 3–6 follows this list).

5. Parse the result into the required format.

6. Send the response as JSON.

7. Render the result on the screen.
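To make steps 3–6 concrete, here is a minimal, hedged sketch of such a servlet using QueryBuilder with a fulltext predicate. The servlet path /bin/fulltextsearch, the request parameters q and path, and the naive JSON building are hypothetical illustration choices, not the exact component described above.

import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import javax.jcr.RepositoryException;
import javax.jcr.Session;
import javax.servlet.Servlet;
import com.day.cq.search.PredicateGroup;
import com.day.cq.search.Query;
import com.day.cq.search.QueryBuilder;
import com.day.cq.search.result.Hit;
import com.day.cq.search.result.SearchResult;
import org.apache.sling.api.SlingHttpServletRequest;
import org.apache.sling.api.SlingHttpServletResponse;
import org.apache.sling.api.servlets.SlingSafeMethodsServlet;
import org.osgi.service.component.annotations.Component;

@Component(service = Servlet.class,
    property = { "sling.servlet.paths=/bin/fulltextsearch", "sling.servlet.methods=GET" })
public class FullTextSearchServlet extends SlingSafeMethodsServlet {

    @Override
    protected void doGet(SlingHttpServletRequest request, SlingHttpServletResponse response) throws IOException {
        // Step 3: read the search parameters sent by the AJAX call.
        String searchString = request.getParameter("q");
        String searchLocation = request.getParameter("path");

        // Step 4: build a QueryBuilder query with a path restriction and a fulltext predicate.
        Map<String, String> predicates = new HashMap<>();
        predicates.put("path", searchLocation);
        predicates.put("fulltext", searchString);
        predicates.put("p.limit", "50");

        QueryBuilder queryBuilder = request.getResourceResolver().adaptTo(QueryBuilder.class);
        Session session = request.getResourceResolver().adaptTo(Session.class);
        Query query = queryBuilder.createQuery(PredicateGroup.create(predicates), session);
        SearchResult result = query.getResult();

        // Steps 5 and 6: collect the hit paths and return them as a (naively built) JSON array.
        StringBuilder json = new StringBuilder("[");
        for (Hit hit : result.getHits()) {
            try {
                if (json.length() > 1) {
                    json.append(',');
                }
                json.append('"').append(hit.getPath()).append('"');
            } catch (RepositoryException e) {
                // Skip hits whose path cannot be resolved.
            }
        }
        json.append(']');

        response.setContentType("application/json");
        response.getWriter().write(json.toString());
    }
}

Step 7 then happens in the component’s JavaScript, which reads this JSON response and renders the matching paths on the screen.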

Advantages of our Custom Search Component

Through Full-Text Search, you can improve the user journey on your AEM website as users can find the specific item they’re looking for. Additionally, this custom search component will help you in site personalization as you can implement a user-permission based search. You can even integrate analytics with this search to understand your users’ demands at a more granular level.

Check out other articles in our blog to learn about the different tools and features that we’ve created around and for different Adobe Experience Cloud solutions.

Smart Crop: Intelligent Image Crop in AEM
https://www.argildx.us/technology/smart-crop-intelligent-cropping-for-aem-assets/ – Mon, 19 Aug 2019

What makes websites vibrant and attractive across all touchpoints? When your images, videos and other assets are well customized and come together like the pieces of a solved puzzle, your website experience is elevated. Images optimized for all target devices improve the responsiveness of your website. If your site is on AEM, the different image features are automatically handled by AEM DAM. However, the limited image rendition capabilities of DAM might make your image lose its effectiveness on certain devices. 

DAM Asset Update Workflow creates only three renditions for uploaded images, which are not always enough in real-life projects where you need banner images, carousel images, card images, thumbnails, etc. This workflow can be updated to include some custom renditions but a lot of effort is required. 

Smart Crop helps you improve your image renditions to create responsive designs. It is an intelligent image handling feature in AEM that crops your images while preserving the area of interest. 

Avoid Custom Workflow and the Manual Tasks of Creating DAM Image Renditions

We’ve all faced the hassle of using inefficient tools to crop images exactly the way we want. The custom crop algorithms let you crop only in fixed combinations, such as from the top left. In the process, your relevant logos, banners and other elements in the background might get cropped out. You are left to grind manually for minutes, and sometimes hours, trying to get the image right for different small-screen devices and card sizes. It is even harder if you are working on a bulk quantity.

Smart Crop is a rockstar in such scenarios. This feature lets you easily create custom renditions for image cards or banners by preserving the focal point in the image. Authors can now edit more images quickly without all the hassle of custom workflow steps or tedious manual image adjustments.

So, What’s this Smart Crop Feature? 

Smart Crop leverages Adobe Sensei technology to automatically crop images based on the focal point of the image. For example, if an image has a mountain view and a person, and you want both in your final cropped image, then with this feature all your renditions will contain the person (or part of the person) and some part of the mountain, depending on the dimensions. This helps in creating quality experiences for users on all touchpoints. Adobe Sensei automatically judges the image and tries to find points of interest, preserving the important parts of the image. It also enables authors to adjust the smart-cropped renditions in place to fit the target devices.

How to use Smart Crop

To smart crop an image, you’ll need to start your AEM instance in Dynamic Media Scene7 mode.

To learn how to start Dynamic Media in S7 mode, refer to this.

The following demonstration is on AEM 6.4 SP3.

This image crop feature works automatically from the Scene7 server by delivering the appropriate image for the screen size.

To create smart cropped images, you need to create an image profile for Smart Crop.

1. Navigate to Tools > Assets > Image Profiles.
Fig: The different tools available in the Assets section of an Adobe Experience Manager instance.

2. Create a new image profile. Select Smart Crop as the Type in Cropping Options. By default, it gives the dimensions of the large, medium and small sizes.
Fig: Creating a Smart Crop image profile using the available options in AEM.

3. You can create custom responsive image crops by introducing extra-small (for cards) or extra-large image sizes (for banner or carousel images) and providing the respective dimensions for them. We can also choose an option for Color and Image Swatch (for products). This automatically detects the prominent color in the image and creates a swatch for the product.
Fig: The cropping options, image sizes and colors available to authors in AEM.

4. After creating the Image Profile, you need to apply this profile to a folder where all the images will be uploaded. Select the Image Profile and click on Apply Processing Profile to folder(s). Select the folder(s) and click Apply.
Fig: Applying the image profile to a folder, along with attributes and actions like deleting and editing.
Fig: Different folders created per image quality to arrange images.

5. The images uploaded in that folder will generate smart crop renditions.
Fig: A smart crop rendition of a woman enjoying the ocean view while walking on the beach with a surfboard.

6. Smart cropped renditions can also be manually adjusted in the editor to further customize the view of the renditions. Click on the Smart Crop option in the menu to open smart crop edits.
Fig: Small, medium and large versions of smart crops of a woman walking on the beach.

7. You can use these images on the page in the Dynamic Media component from the Dynamic Media group. Drag and drop this component and add the relevant image (with smart cropped renditions) in it. Open the dialog, go to the Dynamic Media Settings tab and choose Smart Crop as the Preset Type. You can also add Image Modifiers, i.e. extra parameters to be sent to the Scene7 server for some image effects.
Fig: Preset types in Dynamic Media include Smart Crop, Viewer Preset and Image Preset.
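For orientation, once Dynamic Media has published the asset, a named smart crop rendition can typically be requested directly from the Scene7 image server. The host, company and asset names below are placeholders, and the exact URL pattern may differ in your environment:

https://<dm-server>/is/image/<company>/beach-walk:Small
https://<dm-server>/is/image/<company>/beach-walk:Large?fmt=png&qlt=80

Here Small and Large are the smart crop names defined in the image profile, and fmt/qlt are examples of image modifiers controlling format and quality.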

How will Your Organization Benefit from Using this Intelligent Image Crop Feature? 
  • Smart crop can either be applied to a single asset or used for bulk editing of assets in a folder. 
  • Authors can customize smart cropped renditions for better coverage of focal points using editing layout. 
  • It can also detect available bandwidth and screen size of the device and can optimize the images for delivery. This can dramatically reduce the size of the file. 
  • Smart cropped images rendered via dynamic media are smooth, with no quality loss and can drive enriched user experience with faster loading. 
  • An added advantage of smart cropped images is that they can be used outside AEM as well. 
Challenges with Smart Crop 

While using the smart crop feature in a few projects, we found it challenging to use it in the projects’ custom components. Use this feature in the Dynamic Media component for best results, or design your custom components along the lines of the Dynamic Media component so that they can use the smart crop feature.

Another prerequisite for images to be smart cropped is that the original uploaded image should have dimensions greater than those mentioned in the image profile; otherwise, extra white space may be added to fit the image profile dimensions.

Prospects of Smart Crop 

Smart Video Cropping is also being introduced in AEM 6.5 to enable users to create suitable video sizes that fit different devices while keeping the focal point in the frame. Adobe Sensei will automatically track the points of interest and crop the frames to optimize them for all devices.

So, Adobe Sensei has empowered authors by introducing artificial intelligence and machine learning through Smart Crop to enrich media and content on websites. 

 

Look at other ways to improve your brand’s digital experience using AEM through integrations like the AEM-DTM-Target integration.

On-Deploy Scripts – an ACS AEM Commons Utility
https://www.argildx.us/technology/on-deploy-scripts-an-acs-aem-commons-utility/ – Mon, 05 Aug 2019

Developers need to perform certain tasks during deployment to AEM server. Component reauthoring is one of the major tasks for a developer making changes to existing components. In this article, we’ll talk about using on-deploy scripts to simplify reauthoring tasks in AEM.

Problem Statement:

If the component is to be configured on one or two pages, then reauthoring is simple. However, if the component needs to be configured on many pages, then it’s tedious for an author to reauthor the same component on every page.

This is one of the scenarios in which we can use an on-deploy script. Here is a list of problems we will try to solve.

What Problems are We Trying to Solve?

  • Running ad-hoc scripts manually
  • Updating content without reauthoring
  • Admin updates that need to be done on every server
  • Mass content updates
  • One-time content creation
  • Removing obsolete content
  • Deleting all nodes of a particular resource type

Available Methodology to Solve the Above Problems

  1. Using Servlet
  2. Using Ad-hoc scripts

Issues with Servlet or Ad-hoc Scripts

  • Security risk
  • Password exposure
  • Anyone can execute them
  • The server remains in a broken state until the ad-hoc script is run, or if it fails
  • You must make sure the scripts are run on deployment, else existing functionality breaks
  • Keeping track of which scripts have been run becomes difficult

ACS AEM Commons provides a useful utility called “On-Deploy Scripts,” which helps developers create one-time scripts that execute upon deployment to an AEM server. This feature is available for v3.15.0 and higher versions of ACS AEM Commons.

So, to overcome this problem we can use On-Deploy Scripts.

Advantages of On-Deploy Scripts:

On-Deploy Scripts allows developers to create one-time scripts that execute upon deployment to an AEM server. Some advantages of this one-time script creation feature are given below.

  • Fully automated
  • Continuous integration gives you automatic testing on pre-prod
  • Scripts execute along with the deployment – no more broken state
  • Automation removes the chance of error
  • Script status is preserved to be validated later
  • Available since version v3.15.0

How To use:

  1. Install v3.15.0 or a higher version of ACS AEM Commons
  2. Enable the on-deploy script feature
    • Create an OSGi configuration

“com.adobe.acs.commons.ondeploy.impl.OnDeployExecutorImpl.xml”

<?xml version="1.0" encoding="UTF-8"?>
 <jcr:root xmlns:sling="http://sling.apache.org/jcr/sling/1.0"
 xmlns:jcr="http://www.jcp.org/jcr/1.0"  jcr:primaryType="sling:OsgiConfig"/>
  3. Implement the Script Provider Service
    • Override the getScripts() method.
@Component(immediate = true, service = OnDeployScriptProvider.class,
    property = { "service.description=Developer service that identifies code scripts to execute upon deployment" })
public class OnDeployScriptProviderImpl implements OnDeployScriptProvider {
    @Override
    public List<OnDeployScript> getScripts() {
        return Arrays.asList(
                // Add Script instances here e.g. new Script1(), new Script2(), new Script3()
        );
    }
}
  4. Create the Script and Add It to the Script Provider
    • Override the execute() method.
public class ScriptUpdateProperty extends OnDeployScriptBase implements OnDeployScript {
    @Override
    protected void execute() throws Exception {
        // Fetch (or create) the page content node and point it at the updated component.
        Node nodeUpdate = getOrCreateNode("/content/we-retail/us/en/onDeployePage/jcr:content", "cq:Page", "cq:PageContent");
        nodeUpdate.setProperty("sling:resourceType", "weretail/components/structure/page");
    }
}
  5. Make sure the acs-commons-on-deploy-scripts-service user has all the necessary permissions
  6. Install the package
  7. Check the status at /var/acs-commons/on-deploy-script-status in CRXDE.
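The how-to above only shows a property update, but the same pattern covers the other scenarios listed earlier, such as removing obsolete content. Below is a hedged sketch of a second on-deploy script for that case; the obsolete component path is hypothetical, the snippet follows the same fragment style (imports elided) as the one above, and it would also need to be registered in the provider’s getScripts() list.

public class ScriptRemoveObsoleteComponent extends OnDeployScriptBase implements OnDeployScript {
    @Override
    protected void execute() throws Exception {
        // Hypothetical example: a component under the page content that is obsolete after this release.
        Node pageContent = getOrCreateNode("/content/we-retail/us/en/onDeployePage/jcr:content", "cq:Page", "cq:PageContent");
        if (pageContent.hasNode("root/obsolete_component")) {
            pageContent.getNode("root/obsolete_component").remove();
            // Save explicitly so the removal is persisted even if the executor does not save for us.
            pageContent.getSession().save();
        }
    }
}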

Demo:

A working demo is available on Bitbucket: https://bitbucket.org/argildx/aemdev-meetup-s1/overview

Read our article about another ACS AEM Commons utility, Ensure Service User.

Ensure Service User – An ACS AEM Commons Utility
https://www.argildx.us/technology/ensure-service-user-an-acs-aem-commons-utility/ – Thu, 18 Apr 2019

Prior to AEM 6.2, developers used the administrative resource resolver to access the JCR programmatically. Now, Ensure Service User in AEM makes the process less error-prone and easier to manage.

Problem Statement: 

Let us assume that a developer needs to access a resource from a content path programmatically. Before AEM 6.2, the administrative resource resolver was one of the ways to achieve this. However, in addition to providing access to the content path(s), the administrative resource resolver provides access to the complete repository, thereby making the application prone to undesired changes (e.g. to nodes under /libs, /apps, etc.).

Since AEM 6.2, the administrative resource resolver has been deprecated by Adobe, and to address the above issue Adobe introduced the “Service User”.

Service User: 

A service user is a JCR user with no password set and only the limited privileges necessary to perform a specific task. Having no password means that it is not possible to log in with a service user.

The JCR is accessed using service users instead of the administrative resource resolver.

Below are two plus points about Service User. 

  • A service user per bundle with a limited set of permissions. 
  • Does not have password so no one can login. 
Problems With Service User: 
  • Manual activity on every server. 
  • Assign permission on every server. 
  • Management of these permissions is prone to error. 

Again, there is a lot of manual work for developers to set up a service user in different environments.

To overcome this problem, ACS AEM Commons provides different ways to bootstrap AEM projects with additional functionality: a set of reusable components and an AEM development kit.

Here, we are going to discuss one of the great utilities of ACS AEM Commons, named “Ensure Service User”.

 

Ensure Authorizable or Ensure Service User: 

Ensure Service User is provided by ACS AEM Commons. Here are some points about Ensure Service User.

  • Create once and use everywhere. 
  • Define service user and their ACL in OSGi configuration. 
  • Less error prone and easy to maintain. 
  • You can manage the group and hierarchies for service user. 
  • Available since version v3.8.0. 

 

How to use: 

Here is the step-by-step procedure to use “Ensure Service User”.

  1. Create an OSGi configuration for the service user using a factory configuration

com.adobe.acs.commons.users.impl.EnsureServiceUser-meetupUser.xml 

<?xml version="1.0" encoding="UTF-8"?>
<jcr:root xmlns:sling="http://sling.apache.org/jcr/sling/1.0"
    xmlns:cq="http://www.day.com/jcr/cq/1.0"
    xmlns:jcr="http://www.jcp.org/jcr/1.0"
    xmlns:nt="http://www.jcp.org/jcr/nt/1.0"
    jcr:primaryType="sling:OsgiConfig"
    principalName="meetup-service-user"
    type="add"
    ensure-immediately="{Boolean}true"
    aces="[type=allow;privileges=jcr:read\,rep:write;path=/content]"/>

Please note the following points:

Principal Name: 

The principal name is the name of your service user. It can be just the principal name, the principal name with a relative path, or the principal name with an absolute path.

Remember, service users may only exist under the /home/users/system path.

For example: 

     1. Your input: meetup-service-user

Service user will be created under the path: “/home/users/system/meetup-service-user”

     2. Your input: /my-company/meetup-service-user or my-company/meetup-service-user

Service user will be created under the path: “/home/users/system/my-company/meetup-service-user”

     3. Your input: /home/users/system/my-company/meetup-service-user

Service user will be created under the path: “/home/users/system/my-company/meetup-service-user”

Note: 

  • If a system user with the same principal name exists at a DIFFERENT location, this tool assumes that service user is correct and will not attempt to move it to the location specified in this configuration.
  • If the principal name of an AEM- or ACS AEM Commons-provided system user is specified, the ensure-user process will fail. This list may not always be exhaustive or up to date; it is meant to help protect against collisions.
Type:  

ACS AEM Commons provides the facility to add or remove a service user. Here we are creating the service user.

Option “add”: ensures the existence of the service user.

Option “remove”: ensures that the service user is removed.

Ensure-Immediately:  

Two options are available; it can be set to either true or false. By default, it is true.

Option “true”: when set to true, the ensure operation is performed whenever this bundle is loaded.

 

Aces:  Array of ACE (access control entry) definitions to ensure for the principal. 

Format: type=allow;privileges=jcr:read,rep:write;path=/content/foo;rep:glob=/jcr:content/* 

  • type: allow OR deny; this is a required property.
  • privileges: a comma-delimited list of valid JCR privileges; this is a required property.
  • path: the absolute content path to which the ACE will be applied; this is a required property.
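For illustration, multiple ACEs can be combined in the same aces array. The paths below are hypothetical; note that commas inside a single ACE’s privilege list are escaped (as in the configuration above), while unescaped commas separate the ACE entries themselves:

aces="[type=allow;privileges=jcr:read\,rep:write;path=/content/my-site,type=deny;privileges=jcr:read;path=/content/my-site/private;rep:glob=/jcr:content/*]"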

  

     2. Service user mapping with an OSGi service.

Map your service user to the resource resolver service.

org.apache.sling.serviceusermapping.impl.ServiceUserMapperImpl.amended-meetupUser.xml

<?xml version="1.0" encoding="UTF-8"?>
<jcr:root xmlns:sling="http://sling.apache.org/jcr/sling/1.0"
    xmlns:jcr="http://www.jcp.org/jcr/1.0"
    jcr:primaryType="sling:OsgiConfig"
    user.mapping="[com.argildx.argildx-meetup:ResourceResolverUtil=meetup-service-user]"/>
     3. Use this service to get a resource resolver instance.
import java.util.Collections;
import java.util.Map;
import org.apache.sling.api.resource.LoginException;
import org.apache.sling.api.resource.ResourceResolver;
import org.apache.sling.api.resource.ResourceResolverFactory;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;

// Registered as an OSGi component so that the @Reference below is injected.
@Component(service = ResourceResolverUtil.class)
public class ResourceResolverUtil {

    /** The resource resolver factory. */
    @Reference
    private transient ResourceResolverFactory resourceResolverFactory;

    /**
     * Gets the resource resolver backed by the mapped service user.
     *
     * @return the resource resolver, or null if the service user login fails
     */
    public ResourceResolver getResourceResolver() {
        ResourceResolver resourceResolver = null;
        try {
            // The sub-service name must match the one used in the service user mapping above.
            final Map<String, Object> authInfo =
                Collections.singletonMap(ResourceResolverFactory.SUBSERVICE, (Object) "ResourceResolverUtil");
            resourceResolver = resourceResolverFactory.getServiceResourceResolver(authInfo);
        } catch (LoginException le) {
            // Log and fall through; the caller must handle the null return value.
        }
        return resourceResolver;
    }
}
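A quick usage sketch of the utility above. The caller is hypothetical; it assumes ResourceResolverUtil is injected via @Reference into the calling class, and the content path is just an example:

ResourceResolver resolver = resourceResolverUtil.getResourceResolver();
if (resolver != null) {
    try {
        Resource resource = resolver.getResource("/content/we-retail/us/en/jcr:content");
        if (resource != null) {
            // Read a property with the permissions granted to the service user.
            String title = resource.getValueMap().get("jcr:title", String.class);
        }
    } finally {
        // Always close service resource resolvers when you are done with them.
        resolver.close();
    }
}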

 
Demo: 

A working demo on Bitbucket can be viewed at the following link: https://bitbucket.org/argildx/aemdev-meetup-s1/overview

Sling Dynamic Include (SDI): Dynamically Include Page Components
https://www.argildx.us/technology/sling-dynamic-include-sdi/ – Mon, 21 Jan 2019

In CQ or AEM, most pages remain static, so caching the pages with the dispatcher or any other available AEM plugin/connector is very useful. Now imagine a scenario where the homepage of a news agency must show hot news that differs from region to region, but because of caching it displays the same news in all regions. To handle such live scenarios, the application may require certain elements or components of the page to be included dynamically. In AEM, Sling Dynamic Include (SDI) provides this functionality.

 

Let’s elaborate SDI integration with AEM 6.4, Dynamic Include 3.0.0 and Dispatcher 2.4.

 

Please note that Step 1 and Step 2 need to be performed on publish instance.

 

Step 1:

 

Install Sling Dynamic Include Bundle using the following steps:

  1. Download the Dynamic Include bundle.
  2. Open the bundle console at http://<host>:<port>/system/console/bundles
  3. Click on the Install/Update button in the right corner of the screen.

      4. Check the Start Bundle checkbox, browse to the location where the bundle was downloaded, and click on the Install/Update button.

Once the installation of the bundle is completed, verify it by searching for Dynamic Include. The bundle should be in the Active state.

Step 2:

 

After installing the SDI bundle, the next step is to configure the components to be dynamically included, using the OSGi configuration below.

<?xml version="1.0" encoding="UTF-8"?>
<jcr:root xmlns:sling="http://sling.apache.org/jcr/sling/1.0" xmlns:cq="http://www.day.com/jcr/cq/1.0"
    xmlns:jcr="http://www.jcp.org/jcr/1.0" xmlns:nt="http://www.jcp.org/jcr/nt/1.0"
    jcr:primaryType="sling:OsgiConfig"
    include-filter.config.enabled="{Boolean}true"
    include-filter.config.path="/content"
    include-filter.config.resource-types="[my-app/components/content/dynamic_included_component]"
    include-filter.config.include-type="SSI"
    include-filter.config.add_comment="{Boolean}false"
    include-filter.config.selector="nocache"
    include-filter.config.ttl=""
    include-filter.config.required_header="Server-Agent=Communique-Dispatcher"
    include-filter.config.ignoreUrlParams="[]"
    include-filter.config.rewrite="{Boolean}true"
/>

Please find below the brief description of each OSGI config used above:

  • enabled – set it to true to enable SDI.
  • path – SDI configuration will be enabled only for this path.
  • resource-types – which components should be replaced with tags
  • include-type – type of include tag (Apache SSI, ESI or Javascript)
  • Apache SSI – Apache Server Side Includes

Apache HTTP Server is set up as a caching proxy in front of AEM. This means that the include is done by the HTTP server and not by the Sling engine.

  • ESI – Edge Side Includes

Edge Side Includes can be used as an alternative to SSI; they are evaluated by the CDN. ESI requires a proxy that is able to process its tags, which is often made available as part of a CDN.

  • JavaScript – Ajax

Using the JavaScript include type will replace dynamic components with Ajax tags, so they are loaded by the browser. If the included component has some JS code, it may not work properly, as it won’t be initialized immediately after the page is loaded.

  • Add comment – adds a debug comment, <!-- SDI include (path: %s, resourceType: %s) -->, to every replaced component.
  • Filter selector – the selector added to the HTTP request for a particular component; it is used to get the actual content.
  • TTL – time to live in seconds, set for the rendered component. This property is supported for dispatcher version 4.1.11+.
  • Required header – SDI will be enabled only if the configured header is present in the request. By default it’s the Server-Agent=Communique-Dispatcher header, added by the AEM dispatcher. You may enter just the header name, or the name and the value split with =.
  • Ignore URL params – SDI normally skips requests containing any GET parameters. This option allows you to set a list of parameters that should be ignored.
  • Include path rewriting – enables rewriting of the link (according to Sling mappings) that is used for including the dynamic content.

 

Step 3:

 

After completion of Step 1 and Step 2 on publishing instance, Dispatcher configurations need to be updated as explained below:

1. Include the mod_include module in the Apache web server’s httpd.conf file (if the line is already present, make sure it is uncommented):

LoadModule include_module modules/mod_include.so

2. Update virtual host configuration file

a. Find the following lines in the dispatcher.conf file

<IfModule dispatcher_module>
	SetHandler dispatcher-handler
</IfModule>

modify as below

<IfModule dispatcher_module>
	SetHandler dispatcher-handler
</IfModule>
SetOutputFilter INCLUDES

b. Add Includes to Options directive:

<VirtualHost *:80>
...
<Directory />
				...
Options FollowSymLinks Includes  
AllowOverride None
...
		<Directory>
		...
</VirtualHost>

3.  Update the httpd.conf to enable SDI.

a. Add “Includes” to the Options directive to enable the SSI includes used by Sling Dynamic Include.

b. Specify which file types are to be processed by the Includes filter.

   <Directory /mnt/var/www/html>	
      ...
      Options Indexes FollowSymLinks Includes  
	... 
      AddOutputFilter INCLUDES .html 
	AddOutputFilterByType INCLUDES text/plain text/html
      ...
   </Directory>

4. Update rules.any or dispatcher.any, depending on where the cache rules are defined for the publish instance.

/0008 {
    		/glob "*.nocache.html*"
    		/type "deny"
  	}

Make sure the selector ‘nocache’ used here is the same as the one defined in the OSGi config (include-filter.config.selector = ‘nocache’) explained in Step 2.

5. Restart the server using any of the below commands:

sudo apachectl restart OR sudo service httpd restart

 

Verification

After setting up the SDI, it’s time to verify the changes. Follow the below steps:

 

  1. Right click and open the page source of the webpage where the component is dynamically included.
  2. In the page source, search for the SDI include tag.
  3. A component configured for SDI will be replaced with an SDI tag as shown below:
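For reference, with the SSI include type and the ‘nocache’ selector configured in Step 2, the emitted tag typically looks like the following; the component path here is hypothetical:

<!--#include virtual="/content/my-app/home/jcr:content/par/dynamic_included_component.nocache.html" -->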

Permission Sensitive Caching (PSC)
https://www.argildx.us/technology/permission-sensitive-caching-psc/ – Wed, 16 Jan 2019

In AEM, we have both secured and public pages. The dispatcher can cache all the pages, but it does not distinguish between secured and unsecured pages, so it serves all of them even to an anonymous user. To fix this, the dispatcher needs to know whether a page may be served to a particular user. In AEM, Permission Sensitive Caching (PSC) provides this functionality and enables you to cache secured pages: the dispatcher checks the user’s access permissions for a page before serving the cached page.

So, whenever a request comes to the dispatcher, it hits an AEM servlet to check the user’s permission.

 

Let’s elaborate PSC integration with AEM 6.4 and Dispatcher 2.4.

Step 1: Dispatcher configurations need to be updated as explained below:

a. Add this code in publish-farm :

/auth_checker
  {
  # request is sent to this URL with '?uri=<page>' appended
  /url "/content.pagePermission.getPermission"    
  # only the requested pages matching the filter section below are checked, all other pages get delivered unchecked
  /filter
    {
    /0000
      {
      /glob "*"
      /type "deny"
      }
    /0001
      {
      /glob "/content/we-retail/secure-pages/*.html"
      /type "allow"
      }
    }
  # any header line returned from the auth_checker's HEAD request matching the section below will be returned as well
  /headers
    {
    /0000
      {
      /glob "*"
      /type "deny"
      }
    /0001
      {
      /glob "Set-Cookie:*"
      /type "allow"
      }
    }
  }

Brief description about dispatcher configuration:

  • URL: The URL of the servlet that performs the security check.
  • filter: Specifies the specific folders to which permission-sensitive caching is applied.
  • headers: Specifies the HTTP headers that the authorization servlet includes in the response.

b. Also, make sure /allowAuthorized is set to 1 under the cache configuration.

/cache
{
 ...
 /allowAuthorized "1"
 ...
}

Note: For any page path that matches the PSC filters, the dispatcher will hit the AEM servlet before serving the page from cache, so define the filters wisely because the number of network calls increases with each page hit.

 

Step 2: Now we must create a servlet in AEM that checks whether the requested resource or page is authorized for the user requesting the web content, and sends the corresponding response header.

Below is the Java servlet to which the dispatcher sends the HEAD request:

import java.security.AccessControlException;
import javax.jcr.RepositoryException;
import javax.jcr.Session;
import javax.servlet.Servlet;
import org.apache.sling.api.SlingHttpServletRequest;
import org.apache.sling.api.SlingHttpServletResponse;
import org.apache.sling.api.servlets.SlingSafeMethodsServlet;
import org.osgi.service.component.annotations.Component;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
 
/**
* This servlet validates whether the requested page URI is accessible to the current user and sets the response status accordingly.
*
*/
@Component(service = Servlet.class,
    property = {
        "sling.servlet.methods=HEAD",
        "sling.servlet.resourceTypes=sling/servlet/default",
        "sling.servlet.selectors=pagePermission",
        "sling.servlet.extensions=getPermission"
    })
public class AuthcheckerServlet extends SlingSafeMethodsServlet {
 
  /** The Constant LOGGER. */
  private static final Logger logger = LoggerFactory.getLogger(AuthcheckerServlet.class);

  /** The ".html" extension stripped from the incoming uri parameter before the permission check. */
  private static final String HTML = ".html";

  /** The empty string used as the replacement when stripping the extension. */
  private static final String EMPTY = "";
 
  /**
   * Method to handle the HEAD request for the servlet.
   * 
   * @param request - The request object.
   * @param response - The response object.
   *
   */
  @Override
  public void doHead(SlingHttpServletRequest request, SlingHttpServletResponse response) {
      logger.debug("Start of doHead Method");
      // retrieve the requested URL
      String uri = request.getParameter("uri");
      uri = uri.replace(HTML, EMPTY);
      // obtain the session from the request
      Session session = request.getResourceResolver().adaptTo(javax.jcr.Session.class);
      if (session != null) {
        try {
     // perform the permissions check
        session.checkPermission(uri, Session.ACTION_READ);
        response.setStatus(SlingHttpServletResponse.SC_OK);
      } catch (AccessControlException | RepositoryException e) {
          response.setStatus(SlingHttpServletResponse.SC_FORBIDDEN);
        }
      }
      else {
        response.setStatus(SlingHttpServletResponse.SC_FORBIDDEN);
      }
      logger.debug("End of doHead Method"); 
  }
}

 

Step 3: Restart the dispatcher and you are all set up.

 

Verification

To check whether Permission Sensitive Caching is working, go to the dispatcher.log file; this message must be present there:

AuthChecker: initialized with URL ‘configured_url‘.

 

To check the AuthChecker servlet response, run the following curl commands:

  1. Without Authentication

curl --head http://publishserver:port/content.pagePermission.getPermission?uri=/content/we-retail/secure-pages/pageName.html

Response:

HTTP/1.1 403 Forbidden
Date: Tue, 04 Sep 2018 09:38:31 GMT
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
Content-Length: 0

2. With Authentication

curl --head http://publishserver:port/content.pagePermission.getPermission?uri=/content/we-retail/secure-pages/pageName.html --user username:password

Response:

HTTP/1.1 200 OK
Date: Tue, 04 Sep 2018 09:42:19 GMT
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
Content-Length: 0

 
