A simple case statement can render checks and X's to signify true or false. You can buy a vector image ($5-$20) or use Microsoft Word and snip the image.
case when 'primParentPrtner.Training_Compliance__c' == "false" then "✅" else "❌" end as 'icon'
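For context, here is a minimal sketch of how that projection might sit inside a full query. The dataset name "Partners" and the 'Account.Name' field are assumptions for illustration; the icon mapping follows the snippet above.

```
-- hypothetical dataset and name field; the icon case statement is from the snippet above
q = load "Partners";
q = foreach q generate 'Account.Name' as 'Account',
    case when 'primParentPrtner.Training_Compliance__c' == "false" then "✅" else "❌" end as 'icon';
```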
Got a use case from a cohort looking for help creating a heat map that shows an employee's allocation in hours for a project. In the mockup CSV file I created for this POC, we can see that Dan is engaged in a project from Jan 2023 to July 2023 for 10 hours, engaged from Feb 2024 to March 2024 for 20 hours, and so on. The challenge is that the data only provides the start date and end date, which is not a problem for a two-month engagement. However, Dan's hours need to appear on the heat map for the missing months! In the example above, he needs a plot for Feb 2023, Mar 2023, April 2023, all the way to July 2023. How? By using the fill() function and some string and date manipulation. The big secret is creating a string called 'concat' that embeds the needed info, which the fill() function then propagates into the newly generated months. One concatenates the start date, end date, allocation, and name, then 'decodes' them in the subsequent foreach statements, calculates the epoch seconds, etc. A case statement then goes row by row and marks the 'busy' column true or false depending on whether the row falls inside the project span.
Code:

q = load "DanCSV";
-- encode the concat string to carry the important info
q = foreach q generate q.'Emp' as 'emp', q.'Allocation', q.'Start_Date_Month', q.'Start_Date_Year', q.'Start_Date_Day', q.'End_Date', 'Start_Date' + 'End_Date' + "!" + 'Emp' + "!" + number_to_string('Allocation', "0") as 'concat';
-- use the fill function to generate the missing dates
q = fill q by (dateCols=('Start_Date_Year','Start_Date_Month','Start_Date_Day', "Y-M-D"), startDate="2023-01-01", endDate="2024-12-31", partition='concat');
-- start to decode the concat by getting the NameIndx and AllocIndx indices for later use
q = foreach q generate 'concat', 'emp', 'Start_Date_Year' + "-" + 'Start_Date_Month' + "-" + 'Start_Date_Day' as 'TopDay', substr('concat', 1, 10) as 'ProjStart', substr('concat', 11, 10) as 'ProjEnd', index_of('concat', "!", 1, 1) as 'NameIndx', index_of('concat', "!", 1, 2) as 'AllocIndx';
-- this 'unpacks' the concat string back into its original components
q = foreach q generate 'concat', 'emp', 'TopDay', 'ProjStart', 'ProjEnd', 'NameIndx', 'AllocIndx', 'AllocIndx' - 'NameIndx' - 1 as 'NameLength';
-- retrieve the embedded name and allocation via the indexes
q = foreach q generate 'concat', 'emp', 'TopDay', 'ProjStart', 'ProjEnd', 'NameIndx', 'AllocIndx', 'NameLength', substr('concat', 'NameIndx' + 1, 'NameLength') as 'NewEmp', substr('concat', 'AllocIndx' + 1) as 'NewAlloc';
-- surface the epoch seconds for all the dates
q = foreach q generate 'NewEmp', 'TopDay', 'ProjStart', 'ProjEnd', 'NewAlloc', date_to_epoch(toDate('TopDay', "yyyy-MM-dd")) as 'TopDaySec', date_to_epoch(toDate('ProjStart', "yyyy-MM-dd")) as 'ProjStartSec', date_to_epoch(toDate('ProjEnd', "yyyy-MM-dd")) as 'ProjEndSec', month_first_day(toDate('TopDay', "yyyy-MM-dd")) as 'monthFD';
-- compare TopDay to the start/end dates and flag rows that fall within the span
q = foreach q generate 'NewEmp', 'TopDay', 'ProjStart', 'ProjEnd', 'NewAlloc', 'TopDaySec', 'ProjStartSec', 'ProjEndSec', 'monthFD', case when ('TopDaySec' >= 'ProjStartSec' and 'TopDaySec' <= 'ProjEndSec') then "true" else "false" end as 'busy';
-- show only the busy rows
q2 = filter q by 'busy' == "true";
q2 = foreach q2 generate 'NewEmp', 'TopDay', 'ProjStart', 'ProjEnd', 'NewAlloc', 'monthFD', 'busy';
q2 = group q2 by ('NewEmp', 'monthFD');
q2 = foreach q2 generate 'NewEmp', 'monthFD', string_to_number(min('NewAlloc')) as 'alloc';

User Story: As a user, I need to tally any tasks (call, email, meeting, etc.) done in a certain period. I also want to know whether the source of the email, call, etc. was the oppty, account, contact, or lead object.
Solution: Ingest the Task and/or Event objects and use the WhoId and WhatId fields to join to the 4 objects above. These 2 fields on Task and Event are 'polymorphic', which means their purpose changes based on which object they are connected to. (In object-oriented programming, a command 'animal.move' is said to be polymorphic since it means crawl for a snake or fly for a bird... but I digress.) To the point: WhatId can augment to either an account or an opportunity, while WhoId can augment to either a lead or a contact. More documentation here from SalesforceBen: https://www.salesforceben.com/what-is-the-difference-between-whoid-and-whatid/ After joining the 4 objects into Task (the left grain), we can then use a case statement in the recipe to label each row in Task, i.e., whether the task is from an oppty, acct, lead, or contact. Here is the formula:

case when WhoId is null and "opty.Id" is not null then 'Opty'
when WhoId is null and "opty.Id" is null then 'Acct'
when WhoId is not null and "Lead.Id" is not null then 'Lead'
when WhoId is not null and "Lead.Id" is null then 'Contact'
end

One of the core value propositions of Data Cloud is data harmonization, a fancy term for consolidating multiple profiles from disparate data sources into a single 'Unified Profile'. Why is this important? Clean data is foundational to effective and accurate AI endeavors. Otherwise, the training data for machine learning will be riddled with inaccurate samples, duplicate rows, etc. Simply put: bad data begets bad AI.
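As a minimal SAQL sketch of the labeling plus the tally, assuming a dataset named "TaskJoined" that already carries the joined 'opty.Id' and 'Lead.Id' fields (both names are hypothetical stand-ins for whatever the augment produces):

```
-- hypothetical augmented dataset; field names are assumptions
q = load "TaskJoined";
q = foreach q generate 'Id', 'WhoId', 'WhatId',
    case
        when 'WhoId' is null and 'opty.Id' is not null then "Opty"
        when 'WhoId' is null and 'opty.Id' is null then "Acct"
        when 'WhoId' is not null and 'Lead.Id' is not null then "Lead"
        else "Contact"
    end as 'source';
-- tally tasks per source object
q = group q by 'source';
q = foreach q generate 'source', count() as 'taskCount';
```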
User Story: Company XYZ has an SFDC instance with 3 customers in Contacts: William Hall (email: W_Hall@gmail.com), William Hall (email: WilliamHall3@aol.com), and Will Hall (email: W_Hall@gmail.com). Through proper configuration of match and reconciliation rules, Data Cloud (formerly known as Customer Data Platform, or CDP) will be able to consolidate the 3 Mr. Halls into a cleaner version consisting of 2 William Halls: the William Hall with the gmail address and the William Hall with the aol.com address. The match rule that triggers the above harmonization is 'exact last name + fuzzy first name + exact email'. Additional rules can be layered onto this, such as 'exact frequent flyer miles' or 'exact driver license number', to further 'unify' profiles; more rules added means more consolidation (i.e., a higher consolidation rate). The snips below illustrate this. The 1st set of snips has 4 match rules: Driver License (DL) OR car club OR exact last name + email OR frequent flyer. This set unified 200 profiles into 196. The 2nd set added a 5th match rule, 'Motor Club', which means that if a person has the exact same motor club Id in 1 or more objects involved in the harmonization, they are to be unified into 1 profile.

Security Regime for a CRM Analytics Implementation (January 19, 2024)
User Story: Imagine you are tasked with implementing a robust security strategy for a Salesforce CRM environment. The organization deals with sensitive customer data and requires a secure analytics solution that balances data accessibility with stringent security measures. Assignment Task - share how you would approach the following: • Data Access Control • Audit Trail and Monitoring • External Data Source Security

Proposed Solution: Data Access Control can be achieved by utilizing several methods or layers in the CRMA environment. It starts with app-level security, where access to CRMA assets can be restricted to a certain group of users. As an example, if business unit "A" should only have access to 3 dashboards, those 3 dashboards (and their underlying datasets) should be saved in an app which is then shared with that group of users. The next aspect of CRMA security is object- or field-level security. This involves modifying the object and field access of the integration user and the security user, since any attempt to access objects or fields without permission would result in the dataflows failing. This layer can be implemented in place of, or in combination with, the app-access layer should the requirements call for object access with restrictions on certain fields for certain groups. The next layer is utilizing SFDC's sharing rules in combination with security predicates. The use of one or both would depend on the size of the enterprise and the types of objects to be secured (due to the limitations of the sharing rules). In addition, performance might need to be considered, since there are overhead costs to using sharing rules. In terms of security predicates, flatten transformations would be used to ingest the manager hierarchy, the role hierarchy, opportunity teams, and other sharing hierarchies (ingested through CSVs) to enforce row-level security requirements. A sharing-hierarchy dataset can also be curated.
This dataset, which can be refreshed weekly, contains the aforementioned hierarchies and is ingested into new recipes; an efficient way to streamline the process and replace 4+ nodes with a single node. In addition to the layers above, a superuser category can also be created, achieved by 1) adding a 'Dataset Access' field to the user profile, set to, say, "True", and 2) adding a constant 'ViewAllFlag' field set to "True" in the recipe transformations. This enables a certain category of users to bypass the security restrictions and have complete access to the CRMA datasets/assets. Examples of these personas would be external ad-hoc CRMA developers, or senior people who manage business units not adequately expressed by the existing role hierarchies. Here is an example of a security predicate which makes rows available only to opportunity leaders, opportunity owners, account owners, managers of the owners, users belonging to the role hierarchy, and superusers:

'OpportunityId.Opportunity_Leader__c' == "$User.Id" || 'OwnerId.Id' == "$User.Id" || 'Account.OwnerId' == "$User.Id" || 'OwnerId.ManagerMulti' == "$User.Id" || 'OwnerId.UserRoleId.Roles' == "$User.UserRoleId" || 'ViewAllFlag' == "$User.Dataset_Access__c"

Once all these layers are implemented, the CRMA admin will need to test them by assuming different identities and checking that the asset-access, object/field, and row-level controls work as defined.

Audit Trail and Monitoring - Monitoring the security regime discussed would involve subscribing to external tools to track changes in the security predicates, because security predicate changes are not logged in the SFDC audit log. Third-party change data capture (CDC) tools are available that capture changes to SFDC data:
https://trailhead.salesforce.com/trailblazer-community/feed/0D54S00000HDtugSAD
In addition to monitoring changes to the predicates, the sharing-hierarchy dataset has to be refreshed periodically to track users that have been activated/deactivated in the User object.

External Data Source Security - can be implemented by either pre-filtering the data rows before syncing them into CRMA (using connectors) or post-filtering them after they get synced. Performance and generalizability-of-use factors need to be considered in deciding which method to use.
One of the important benefits of Data Cloud is its ability to unify profiles that exist in multiple data sources. Using rulesets, a user can create match rules which act as criteria for deciding whether data from one object ought to be 'unified' with data from another object (e.g., a row from Marketing Cloud consolidated with a row from Service Cloud into a new unified profile object). Many match rules can be created which, when chained together, become a giant filter with 'or' logic. This means that if rows between objects satisfy any of the chained rules, those rows are deemed a match. Imagine there are a total of 500 rows from 3 objects, and the Data Cloud admin determines that rows with matching last names and identical emails are a match (criterion 1), then adds 2 more criteria: say, same passport number (criterion 2) and same driver's license number (criterion 3). After running the identity resolution process, the 500 rows get unified into 400 rows in the unified profile object. This results in a consolidation rate of 20% (1 - 400/500). During UAT, the end users think the 3 criteria were too aggressive in defining what constitutes a match; the admin then takes off 2 of the criteria and finds that the 500 rows only get unified into 490 rows, which results in a 2% consolidation rate (1 - 490/500). Intuitively it makes sense: the fewer the criteria, the stricter the match, the lower the consolidation rate; the more criteria, the looser the match (remember it uses OR logic, not AND), the higher the consolidation rate. Snipped above are 2 rulesets: the one on the left had only 1 criterion, the one on the right had 4.
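The consolidation-rate arithmetic can also be sketched in SAQL, using the same cogroup-by-all pattern as the TOTALS query elsewhere on this site. The dataset names "SourceProfiles" and "UnifiedProfiles" are hypothetical stand-ins for the source objects and the unified profile object:

```
-- count the rows in each dataset (dataset names are assumptions)
src = load "SourceProfiles";
src = group src by all;
src = foreach src generate count() as 'srcCount';
uni = load "UnifiedProfiles";
uni = group uni by all;
uni = foreach uni generate count() as 'uniCount';
-- cogroup by all to put the two counts side by side
r = cogroup src by all full, uni by all;
-- consolidation rate = 1 - unified/source (e.g., 1 - 400/500 = 20%)
r = foreach r generate sum(src.'srcCount') as 'srcCount', sum(uni.'uniCount') as 'uniCount',
    1 - (sum(uni.'uniCount') / sum(src.'srcCount')) as 'consolidationRate';
```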
User Story: User needs to see how the oppty pipeline differs between two dates. Step 1 involves creating a recipe that runs once daily and snapshots the values of the oppty object (or OLI if more detail is needed). Step 2: create a dashboard with 2 dates and render oppty by state using multiple data streams. The code below shows the data stream and the 2 queries which feed the pipeline snapshot dates, i.e., today and 1 year back.
Results query to calc 1 year back:

q = load "zds_completeOpptys_v2";
q = filter q by date('SnapshotStamp_Year', 'SnapshotStamp_Month', 'SnapshotStamp_Day') in ["366 days ago".."365 days ago"];
q = group q by 'SnapshotStamp';
q = foreach q generate 'SnapshotStamp' as 'sd', substr('SnapshotStamp', 2, 10) as 'sd2';
q = order q by 'sd' desc;
q = limit q 1;

Query for today:

q = load "zds_completeOpptys_v2";
q = filter q by date('SnapshotStamp_Year', 'SnapshotStamp_Month', 'SnapshotStamp_Day') in ["current day".."current day"];
q = group q by 'SnapshotStamp';
q = foreach q generate 'SnapshotStamp' as 'topDay', substr('SnapshotStamp', 2, 10) as 'topDayStr';
q = limit q 1;

Comparison of the 2 snapshot dates (the dashboard JSON "query" value, unescaped for readability):

q = load "zds_completeOpptys_v2";
q = filter q by 'Region' != "International";
q = filter q by !('AccountOwner' in ["Integration User", "Integration User2"]);
q1 = filter q by {{column(timeStampText_4.selection, ["Snapshot1"]).asEquality('timeStampText')}};
q1 = filter q1 by date('CloseDate_Year', 'CloseDate_Month', 'CloseDate_Day') in {{cell(static_2.selection, 0, "Valu").asString()}};
q2 = filter q by {{column(timeStampText_5.selection, ["Snapshot2"]).asEquality('timeStampText')}};
q2 = filter q2 by date('CloseDate_Year', 'CloseDate_Month', 'CloseDate_Day') in {{cell(q_FY2_1.selection, 0, "Valu").asString()}};
result = group q1 by '{{column(static_1.selection, ["Valu"]).asObject()}}' full, q2 by '{{column(static_1.selection, ["Valu"]).asObject()}}';
result = foreach result generate coalesce(q1.'{{column(static_1.selection, ["Valu"]).asObject()}}', q2.'{{column(static_1.selection, ["Valu"]).asObject()}}') as '{{column(static_1.selection, ["Valu"]).asObject()}}', round(sum(q1.'Amt'), 0) as 'Pipeline1 Amt', round(sum(q2.'Amt'), 0) as 'Pipeline2 Amt', round(sum(q1.'Amt') - sum(q2.'Amt'), 0) as 'Difference';
result = order result by ('{{column(static_1.selection, ["Valu"]).asObject()}}' asc);
q3 = filter q by {{column(timeStampText_4.selection, ["Snapshot1"]).asEquality('timeStampText')}};
q3 = filter q3 by date('CloseDate_Year', 'CloseDate_Month', 'CloseDate_Day') in {{cell(static_2.selection, 0, "Valu").asString()}};
q4 = filter q by {{column(timeStampText_5.selection, ["Snapshot2"]).asEquality('timeStampText')}};
q4 = filter q4 by date('CloseDate_Year', 'CloseDate_Month', 'CloseDate_Day') in {{cell(q_FY2_1.selection, 0, "Valu").asString()}};
tot = cogroup q3 by all full, q4 by all;
tot = foreach tot generate "--------- TOTALS --------- " as '{{column(static_1.selection, ["Valu"]).asObject()}}', round(sum(q3.'Amt'), 0) as 'Pipeline1 Amt', round(sum(q4.'Amt'), 0) as 'Pipeline2 Amt', round(sum(q3.'Amt') - sum(q4.'Amt'), 0) as 'Difference';
final = union result, tot;

q = load "x_pokeOppty";
q = foreach q generate q.'mv_prod.ProductCode' as 'mv_prod.ProductCode', q.'Gross_Margin_Dollars__c' as 'Gross_Margin_Dollars__c', 'Acct.CreatedDate' as 'Acct.CreatedDate', day_in_week(toDate('Acct.CreatedDate_sec_epoch')) as 'DayCreated', 'Acct.CreatedDate_Year' as 'CreatedDtYear';
q = foreach q generate 'mv_prod.ProductCode', 'Gross_Margin_Dollars__c', 'Acct.CreatedDate', 'DayCreated', 'CreatedDtYear', case when 'DayCreated' == 1 then "Sunday" when 'DayCreated' == 2 then "Monday" when 'DayCreated' == 3 then "Tuesday" when 'DayCreated' == 4 then "Wednesday" when 'DayCreated' == 5 then "Thursday" when 'DayCreated' == 6 then "Friday" when 'DayCreated' == 7 then "Saturday" end as 'dayString';
q = group q by ('dayString', 'DayCreated', 'CreatedDtYear');
q = foreach q generate 'DayCreated' as 'DayCreated', 'dayString' as 'dayString', 'CreatedDtYear', count() as 'dayCount';
q = order q by 'DayCreated';

Toggle values (Value / Display / sortPivots / sortMeasures):
1  User_Division__x                      Team | Owner | Region                 --  --
2  Initial Call                          Initial Call                          -   --
3  Initial Call - Yes Schedule Meeting   Initial Call - Yes Schedule Meeting   -   --

Here is a sample of a table widget with totals, with bindings used to sort measures or dimensions (since faceting is needed to facilitate the interactivity of dashboards). Projections need their API names vs. string labeling. The snip is pseudocode (almost).

q = load "Activities";
q_A = filter q by 'Type_of_Call__c' == "Proposal Revision" && 'Proposal_Meeting_Result__c' in ["Maybe - Revise Proposal", "No - Create New Activity", "Yes - Deal is Closed Won!"];
q_B = filter q by 'Type_of_Call__c' == "Proposal Revision" && 'Proposal_Meeting_Result__c' == "Yes - Deal is Closed Won!";
q_C = filter q by 'Type_of_Call__c' == "Initial Call" && 'Result_of_Call__c' in ["CFA, Successful", "No, Successful", "No, Unsuccessful", "Yes, Schedule Meeting"];
q_A = group q_A by rollup('{{cell(static_3.selection, 0, "Value").asObject()}}');
q_A = order q_A by ('{{cell(static_3.selection, 0, "Value").asObject()}}' asc nulls first);
q_B = group q_B by rollup('{{cell(static_3.selection, 0, "Value").asObject()}}');
q_B = order q_B by ('{{cell(static_3.selection, 0, "Value").asObject()}}' asc nulls first);
q_C = group q_C by rollup('{{cell(static_3.selection, 0, "Value").asObject()}}');
q_C = order q_C by ('{{cell(static_3.selection, 0, "Value").asObject()}}' asc nulls first);
result = group q_A by '{{cell(static_3.selection, 0, "Value").asObject()}}' full, q_B by '{{cell(static_3.selection, 0, "Value").asObject()}}' full, q_C by '{{cell(static_3.selection, 0, "Value").asObject()}}';
result = foreach result generate coalesce(q_A.'{{cell(static_3.selection, 0, "Value").asObject()}}', q_B.'{{cell(static_3.selection, 0, "Value").asObject()}}', q_C.'{{cell(static_3.selection, 0, "Value").asObject()}}') as '{{cell(static_3.selection, 0, "Value").asObject()}}', sum(q_A.'ActivityCount') as 'Proposal Revision', sum(q_B.'ActivityCount') as 'Proposal Revision Yes - Deal is Closed Won!', sum(q_C.'ActivityCount') as 'Initial Call', coalesce(grouping(q_A.'{{cell(static_3.selection, 0, "Value").asObject()}}'), grouping(q_B.'{{cell(static_3.selection, 0, "Value").asObject()}}'), grouping(q_C.'{{cell(static_3.selection, 0, "Value").asObject()}}')) as 'grouping_{{cell(static_3.selection, 0, "Value").asObject()}}';
result = foreach result generate '{{cell(static_3.selection, 0, "Value").asObject()}}', 'Initial Call', 'Initial Call - Yes Schedule Meeting', 'Initial Call - Yes Schedule Meeting' / 'Initial Call' as 'Yes, Schedule Meeting', 'Presentation Meeting', 'Yes, Quote it - Presentation Meeting', 'Yes, Quote it - Presentation Meeting' / 'Presentation Meeting' as 'Ratio - Yes Quote it', 'grouping_{{cell(static_3.selection, 0, "Value").asObject()}}';
summary = filter result by 'grouping_{{cell(static_3.selection, 0, "Value").asObject()}}' == 1;
result = filter result by 'grouping_{{cell(static_3.selection, 0, "Value").asObject()}}' == 0;

Here are the sorting bindings through toggles.

First binding 'switch': {{cell(columnList_1.selection, 0, "sortPivots").asObject()}}
result = order result by ('{{cell(columnList_1.selection, 0, "valu").asObject()}}' {{cell(sorter_1.selection, 0, "valu").asObject()}} nulls last);

Second binding 'switch': {{cell(columnList_1.selection, 0, "sortMeasures").asObject()}}
result = order result by ('{{cell(static_3.selection, 0, "Value").asObject()}}' {{cell(sorter_1.selection, 0, "valu").asObject()}} nulls last);

result = union result, summary;

The SAQL resolves to...
q = load "0Fb5x000000ToKGCA0/0Fc5x000007ZFMjCAO";
q_A = filter q by 'Type_of_Call__c' == "Proposal Revision" && 'Proposal_Meeting_Result__c' in ["Maybe - Revise Proposal", "No - Create New Activity", "Yes - Deal is Closed Won!"];
q_B = filter q by 'Type_of_Call__c' == "Proposal Revision" && 'Proposal_Meeting_Result__c' == "Yes - Deal is Closed Won!";
q_C = filter q by 'Type_of_Call__c' == "Initial Call" && 'Result_of_Call__c' in ["CFA, Successful", "No, Successful", "No, Unsuccessful", "Yes, Schedule Meeting"];
q_A = group q_A by rollup('User_Division__c');
q_A = order q_A by ('User_Division__c' asc nulls first);
q_B = group q_B by rollup('User_Division__c');
q_B = order q_B by ('User_Division__c' asc nulls first);
q_C = group q_C by rollup('User_Division__c');
q_C = order q_C by ('User_Division__c' asc nulls first);
result = group q_A by 'User_Division__c' full, q_B by 'User_Division__c' full, q_C by 'User_Division__c';
result = foreach result generate coalesce(q_A.'User_Division__c', q_B.'User_Division__c', q_C.'User_Division__c') as 'User_Division__c', sum(q_A.'ActivityCount') as 'Proposal Revision', sum(q_B.'ActivityCount') as 'Proposal Revision Yes - Deal is Closed Won!', sum(q_C.'ActivityCount') as 'Initial Call', coalesce(grouping(q_A.'User_Division__c'), grouping(q_B.'User_Division__c'), grouping(q_C.'User_Division__c')) as 'grouping_User_Division__c';
result = foreach result generate 'User_Division__c', 'Initial Call', 'Initial Call - Yes Schedule Meeting', 'Initial Call - Yes Schedule Meeting' / 'Initial Call' as 'Yes, Schedule Meeting', ...etc, 'grouping_User_Division__c';
summary = filter result by 'grouping_User_Division__c' == 1;
result = filter result by 'grouping_User_Division__c' == 0;
-- result = order result by ('User_Division__x' asc nulls last);
result = order result by ('User_Division__c' asc nulls last);
result = union result, summary;